20487D
Developing Microsoft Azure™ and Web
Services
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only, and
Microsoft makes no representations or warranties, either express, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft, and Microsoft is not
responsible for the contents of any linked site, any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
Released: 01/2019
MICROSOFT LICENSE TERMS
MICROSOFT INSTRUCTOR-LED COURSEWARE
These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its
affiliates) and you. Please read them. They apply to your use of the content accompanying this agreement which
includes the media on which you received it, if any. These license terms also apply to Trainer Content and any
updates and supplements for the Licensed Content unless other terms accompany those items. If so, those terms
apply.
BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.
If you comply with these license terms, you have the rights below for each license you acquire.
1. DEFINITIONS.
a. “Authorized Learning Center” means a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, or such other entity as Microsoft may designate from time to time.
b. “Authorized Training Session” means the instructor-led training class using Microsoft Instructor-Led
Courseware conducted by a Trainer at or through an Authorized Learning Center.
c. “Classroom Device” means one (1) dedicated, secure computer that an Authorized Learning Center owns
or controls that is located at an Authorized Learning Center’s training facilities that meets or exceeds the
hardware level specified for the particular Microsoft Instructor-Led Courseware.
d. “End User” means an individual who is (i) duly enrolled in and attending an Authorized Training Session
or Private Training Session, (ii) an employee of an MPN Member, or (iii) a Microsoft full-time employee.
e. “Licensed Content” means the content accompanying this agreement which may include the Microsoft
Instructor-Led Courseware or Trainer Content.
f. “Microsoft Certified Trainer” or “MCT” means an individual who is (i) engaged to teach a training session
to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) currently certified as a
Microsoft Certified Trainer under the Microsoft Certification Program.
g. “Microsoft Instructor-Led Courseware” means the Microsoft-branded instructor-led training course that
educates IT professionals and developers on Microsoft technologies. A Microsoft Instructor-Led
Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business Group courseware.
h. “Microsoft IT Academy Program Member” means an active member of the Microsoft IT Academy
Program.
i. “Microsoft Learning Competency Member” means an active member of the Microsoft Partner Network
program in good standing that currently holds the Learning Competency status.
j. “MOC” means the “Official Microsoft Learning Product” instructor-led courseware known as Microsoft
Official Course that educates IT professionals and developers on Microsoft technologies.
k. “MPN Member” means an active Microsoft Partner Network program member in good standing.
l. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic device
that you personally own or control that meets or exceeds the hardware level specified for the particular
Microsoft Instructor-Led Courseware.
m. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led Courseware.
These classes are not advertised or promoted to the general public and class attendance is restricted to
individuals employed by or contracted by the corporate customer.
n. “Trainer” means (i) an academically accredited educator engaged by a Microsoft IT Academy Program
Member to teach an Authorized Training Session, and/or (ii) an MCT.
o. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and additional
supplemental content designated solely for Trainers’ use to teach a training session using the Microsoft
Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint presentations, trainer
preparation guide, train-the-trainer materials, Microsoft OneNote packs, classroom setup guide, and Pre-
release course feedback form. To clarify, Trainer Content does not include any software, virtual hard
disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one copy
per user basis, such that you must acquire a license for each individual that accesses or uses the Licensed
Content.
2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not
separate its components and install them on different devices.
2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above, you may
not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any
third parties without the express written permission of Microsoft.
2.4 Third Party Notices. The Licensed Content may include third party code that Microsoft, not the
third party, licenses to you under this agreement. Notices, if any, for the third party code are included
for your information only.
2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also
apply to your use of that respective component and supplement the terms described in this agreement.
a. Pre-Release Licensed Content. This Licensed Content is based on the Pre-release version of
the Microsoft technology. The technology may not work the way a final version of the technology will,
and we may change the technology for the final version. We also may not release a final version.
Licensed Content based on the final version of the technology may not contain the same information as
the Licensed Content based on the Pre-release version. Microsoft is under no obligation to provide you
with any further content, including any Licensed Content based on the final version of the technology.
b. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly or
through its third party designee, you give to Microsoft, without charge, the right to use, share, and
commercialize your feedback in any way and for any purpose. You also give to third parties, without
charge, any patent rights needed for their products, technologies and services to use or interface with
any specific parts of a Microsoft technology, Microsoft product, or service that includes the feedback.
You will not give feedback that is subject to a license that requires Microsoft to license its technology,
technologies, or products to third parties because we include your feedback in them. These rights
survive this agreement.
c. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed Content on
the Pre-release technology upon (i) the date which Microsoft informs you is the end date for using the
Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the commercial release of the
technology that is the subject of the Licensed Content, whichever is earlier (“Pre-release term”).
Upon expiration or termination of the Pre-release term, you will irretrievably delete and destroy all copies
of the Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more
rights despite this limitation, you may use the Licensed Content only as expressly permitted in this
agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only
allow you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
• access or allow any individual to access the Licensed Content if they have not acquired a valid license
for the Licensed Content,
• alter, remove or obscure any copyright or other protective notices (including watermarks), branding
or identifications contained in the Licensed Content,
• modify or create a derivative work of any Licensed Content,
• publicly display, or make the Licensed Content available for others to access or use,
• copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
• work around any technical limitations in the Licensed Content, or
• reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws
and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the
Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations.
You must comply with all domestic and international export laws and regulations that apply to the Licensed
Content. These laws include restrictions on destinations, end users and end use. For additional information,
see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail
to comply with the terms and conditions of this agreement. Upon termination of this agreement for any
reason, you will immediately stop all use of and delete and destroy all copies of the Licensed Content in
your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for
the contents of any third party sites, any links contained in third party sites, or any changes or updates to
third party sites. Microsoft is not responsible for webcasting or any other form of transmission received
from any third party sites. Microsoft is providing these links to third party sites to you only as a
convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party
site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws
of your country. You may also have rights with respect to the party from whom you acquired the Licensed
Content. This agreement does not change your rights under the laws of your country if the laws of your
country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS
AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE
AFFILIATES GIVE NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY
HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT
CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND
ITS RESPECTIVE AFFILIATES EXCLUDE ANY IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP
TO US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL,
LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
This limitation also applies even if Microsoft knew or should have known about the possibility of the
damages. The above limitation or exclusion may not apply to you because your country may not allow the
exclusion or limitation of incidental, consequential, or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses de ce
contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à vos seuls risques et périls. Microsoft n’accorde aucune autre garantie
expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des
consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties
implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre
pays si celles-ci ne le permettent pas.
Acknowledgements
Microsoft Learning wants to acknowledge and thank the following for their contribution toward
developing this title. Their effort at various stages in the development has ensured that you have a good
classroom experience.
Ishai is the Vice President of SELA Group. He has over 20 years of experience as a professional trainer and
consultant on computer software and electronics.
Baruch is a senior project manager at SELA Group. He has extensive experience in producing Microsoft
Official Courses and managing software development projects. Baruch is also a lecturer at SELA College
delivering a variety of development courses.
Sasha Goldshtein is the CTO at Sela Group, a Microsoft C# MVP and Regional Director, a Pluralsight and
O'Reilly author, and an international consultant and trainer. Sasha is the author of "Introducing Windows
7 for Developers" (Microsoft Press, 2009) and "Pro .NET Performance" (Apress, 2012). He is also a prolific
blogger and open-source contributor, and author of numerous training courses including .NET
Debugging, .NET Performance, Android Application Development, and Modern C++. His consulting work
revolves mainly around distributed architecture, production debugging and performance diagnostics, and
mobile application development.
Avi Avni is a consultant and an instructor at SELA Group, with 7+ years of industry experience. Avi
specializes in design and development of large-scale applications and diagnosing memory and CPU
performance issues. Avi has 2+ years of experience as a team leader. Avi is a contributor to several open-
source projects such as F# Compiler, CoreCLR, Roslyn, and ClrMD.
Viacheslav is a senior developer and lecturer at SELA Group. Viacheslav has six years of experience in
developing and maintaining large-scale solutions in a variety of technologies. Viacheslav is a proficient
problem solver and content developer. Viacheslav’s main technology interests vary between web and
desktop development.
Roi is a senior developer and lecturer at SELA Group. Roi has over five years of experience in developing
desktop, web, and mobile applications. Roi is a full stack developer, specializing in both front-end and
back-end development. Roi delivers many courses in the IT industry.
Shalev is a senior developer and lecturer at SELA Group. Shalev has over five years of experience in
software development and a proven track record in development of large-scale hybrid applications.
Shalev’s main interest is in back-end solution development. Shalev delivers many training sessions in the
industry. His other fields of interest include Azure development, web development, and mobile
development.
Shelly Aharoni, Naor Michelsohn, Amith Vincent, Kavitha Ravipati, Vinay Antony, Dhananjaya Punugoti,
and the Enfec Team.
Contents
Module 1: Overview of Service and Cloud Technologies
Module Overview 1-1
Lab B: Host an ASP.NET Core Web API in an Azure Web App 5-16
Lesson 3: Packaging services in containers 5-17
Course Description
This course will provide you with the knowledge and skills to design and develop services that access local
and remote data from various sources. You will learn how to develop and deploy services to hybrid
environments, including on-premises servers and Microsoft Azure.
This course will help you prepare for the 70-487 exam.
Audience
This course is intended for both novice and experienced Microsoft .NET developers who have a minimum
of six months programming experience and want to learn how to develop services and deploy them to
hybrid environments.
Student Prerequisites
Before attending this course, students must have at least six months of programming experience. In
addition, the students must meet the following prerequisites:
• Experience with Microsoft Visual Studio 2017 or later.
Course Objectives
After completing this course, students will be able to:
• Use ASP.NET Core Web API to create HTTP-based services and consume them from .NET and non-
.NET clients.
• Extend ASP.NET Core Web API services by using middleware, action filters, and media type
formatters.
• Host services on on-premises servers and various Azure environments such as Azure Web Apps, Azure
Container Instance, and Azure Functions.
• Choose a data storage solution, and cache, distribute, and synchronize data.
• Monitor and log services, both on-premises and in Azure.
• Implement authentication and authorization with Azure Active Directory (Azure AD).
Course Outline
The course outline is as follows:
Module 1. Overview of Service and Cloud Technologies
This module provides an overview of service and cloud technologies using Microsoft .NET Core and
Microsoft Azure.
Module 2. Querying and Manipulating Data Using Entity Framework Core
This module explains how to create Entity Framework Core models and use them to query and
manipulate data.
Module 3. Creating and Consuming ASP.NET Core Web APIs
This module explains how to create and consume HTTP-based services by using ASP.NET Core Web API.
Module 4. Extending ASP.NET Core HTTP Services
This module explains how to extend ASP.NET Core Web API services by using middleware, action filters,
and media type formatters.
Module 5. Hosting Services
This module describes how to host services on various Azure environments such as Azure Web Apps,
Azure Container Instance, and Azure Functions.
Module 6. Deploying and Managing Services
This module explains how to deploy and manage services.
Module 7. Implementing Data Storage in Microsoft Azure
This module describes how to choose a data storage solution, and how to cache, distribute, and
synchronize data.
Module 8. Diagnostics and Monitoring
This module explains how to monitor and log services, both on-premises and in Azure.
Module 9. Securing Services On-premises and in Microsoft Azure
This module describes claim-based identity concepts and standards, and how to implement
authentication and authorization by using Azure AD to secure an ASP.NET Core Web API service.
Module 10. Scaling Services
This module explains how to create scalable services and applications and scale them automatically by
using Web Apps load balancers, Azure Application Gateway, and Azure Traffic Manager.
Course Materials
The following materials are included with your kit:
• Course Handbook is a succinct classroom learning guide that provides the critical technical
information in a crisp, tightly focused format, which is essential for an effective in-class learning
experience.
You may be accessing either a printed course handbook or digital courseware material via the Skillpipe
reader by Arvato. Your Microsoft Certified Trainer will provide specific details, but both printed and digital
versions contain the following:
o Lessons guide you through the learning objectives and provide the key points that are critical to
the success of the in-class learning experience.
o Labs provide a real-world, hands-on platform for you to apply the knowledge and skills learned
in the module.
o Module Reviews and Takeaways sections provide on-the-job reference material to boost
knowledge and skills retention.
To run the labs and demos in this course, use the code and instruction files that are available on GitHub:
• Instruction files: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/tree/master/Instructions
Make sure to clone the repository to your local machine. Cloning the repository before the course ensures
that you have all the required files without depending on the connectivity in the classroom.
• Course evaluation. At the end of the course, you will have the opportunity to complete an online
evaluation to provide feedback on the course, training facility, and instructor.
o To provide additional comments or feedback, or to report a problem with course resources, visit
the Training Support site at https://trainingsupport.microsoft.com/en-us. To inquire about the
Microsoft Certification Program, send an e-mail to certify@microsoft.com.
Module 1
Overview of Service and Cloud Technologies
Contents:
Module Overview 1-1
Module Overview
This module provides an overview of service and cloud technologies using Microsoft .NET Core and
Microsoft Azure. The first lesson, “Key Components of Distributed Applications,” discusses characteristics
that are common to distributed systems, regardless of the technologies they use. Lesson 2, “Data and Data
Access Technologies” describes how data is used in distributed applications. Lesson 3, “Service
Technologies,” discusses two of the most common protocols in a distributed system and the .NET Core
technologies used to develop services based on those protocols. Lesson 4, “Cloud Computing,” describes
cloud computing and how it is implemented in Azure.
Note: The Azure portal user interface (UI) and Azure dialog boxes in Visual Studio 2017 are
updated frequently when new Azure components and SDKs for .NET are released. Therefore, it is
possible that some differences will exist between screen shots and steps shown in this module,
and the actual UI you encounter in the Azure portal and Visual Studio 2017.
Objectives
After completing this module, you will be able to:
• Describe the key components of distributed applications.
• Describe data and data access technologies.
• Explain service technologies.
• Explain cloud computing and the Microsoft Azure platform.
Lesson 1
Key Components of Distributed Applications
Users today expect applications to present and process information from varied data sources, which might
be geographically distributed. Modern applications must also support different platforms such as mobile
and desktop, in addition to providing up-to-date information and an appealing UI.
Designing such applications is not a trivial task and involves collaboration and integration between several
groups of components.
This lesson describes the key components and architecture of modern distributed applications.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe the basic characteristics of distributed applications.
Modern applications can run multiple instances on a variety of different platforms, yet they are expected
to have access to the same data and always stay in sync.
Data is distributed between data centers, private computers, and mobile devices. Data should be secured
and private, but at the same time available to its owners and legitimate customers. Today, both data and
the number of users have increased exponentially. Applications must provide services to access data and
maintain high-quality standards in terms of availability and performance.
The only way to achieve availability and performance is through collaboration and distribution of load. An
application can achieve its performance requirements by distributing the computing load across multiple
servers. By using many web servers that are geographically distributed, you also increase the availability
of your applications. Applications also consume data from a variety of data sources to provide a rich set
of functionalities, and share their own data. Finally, applications replicate, cache, and centralize data to
provide the best user experience.
It is simply impossible to provide a modern, high-scale application within the borders of a traditional
single computer. Today, data and computing distribution is a necessity.
MCT USE ONLY. STUDENT USE PROHIBITED
Developing Microsoft Azure™ and Web Services 1-3
Distributed applications are commonly evaluated by the following characteristics:
• Scalability
• Availability
• Latency
• Reliability
• Security and privacy
Scalability
Distributed systems provide value by using the collaboration of a group of services and clients that are
geographically distributed. Each service must serve many requests originating from different clients. A
scalable service can provide service to a growing number of clients. Scalability is measured by the ratio
between the growth in the number of customers and the growth in the required infrastructure. You can achieve
scalability by using an appropriate design, such as designing stateless services so you can run them on
multiple computers and integrating distributed cache solutions for services that need to share their state
between computers.
Availability
Today’s systems serve a global audience, located around the world in different time zones. Services must
be available 100 percent of the time and be resilient to connectivity or performance issues. You can
achieve high-availability in a distributed environment by using design guidelines such as fail-over services
and appropriate decoupling between services.
Latency
Latency is the delay introduced by a system when responding to a single request. Users expect
applications to present valuable information without any unnecessary delays. The information must always
be available, the application must be responsive, and the user experience must be smooth. To provide a
seamless user experience, services must have a short response time. If the service introduces a long delay,
the experience is not considered to be smooth. When designing a system to have low latency, you should
consider concepts such as caching data, parallelizing tasks, and reducing the size of payloads for both
requests and responses.
Reliability
Information is a valuable asset. Clients expect distributed applications to store their data reliably and
make sure that it is never lost or damaged. Keeping data consistent might not be trivial in a distributed
environment where multiple instances might handle the same piece of data concurrently. Data must be
replicated and geographically distributed to handle the risk of hardware failure of any kind.
Security and Privacy
The fact that the system is distributed means that data will be distributed as well. Yet the system must
ensure that only legitimate stakeholders get access to it at any time. Often distributed systems have no
boundaries and are accessible to anyone through the internet. This can include potential attackers who
wish to harm the system and disturb its normal behavior. Proper security design that incorporates
concepts such as communication encryption, authentication, and authorization, can reduce the risk of
information disclosure, denial of service, and data theft.
A typical distributed application is composed of the following layers:
• Data layer
• Execution layer
• Service layer
Data Layer
The data layer is responsible for storing and accessing data. It stores, queries, updates, or deletes the
data as required while maintaining reasonable performance. This
can be a complicated task when you are dealing with a large set of data, distributed across several data
sources.
The data manipulation policy depends on the data type and its properties. Data can be replicated,
distributed, and handled according to its characteristics. For example, client contacts can be replicated
across the data center because they change slowly. However, information about stocks must always be
accurate and therefore must be read from a single source.
Execution Layer
The execution layer contains the business logic and is responsible for carrying out the use-case scenarios
of the application. In other words, the execution layer implements the logic of the application. The
business logic uses the data layer to read and store data, and the UI layer to interact with the client. The
execution layer contains all the algorithms and logic of the application and is considered the brain of the
application.
Service Layer
The service layer exposes some of the capabilities of the application to the world as services. Other
applications might consume these services and use them as a data source or as a remote execution
engine.
The service layer acts as the interface for other applications, in contrast to the user interface layer, which
targets humans. The service layer drives collaboration of applications and enables distribution of
computing load and data. It is responsible for defining a contract that consumers must maintain to use
the service. It enforces security policies, validates incoming requests, and maintains the application
resources.
Lesson 2
Data and Data Access Technologies
Our identities, financial status, commercial activities, professional and social relations, and more are
persisted as data located across various data sources.
Applications access data, process it to provide value, and finally produce more data for future use.
In this lesson, you will be introduced to various database technologies, along with .NET Core data access
technologies.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe common storage technologies.
• Describe .NET Core data access technologies.
Storage Technologies
Data can be persisted in a variety of different formats and in a wide range of infrastructures. Each
infrastructure and format is designed for different scenarios and data types. Some storage infrastructures
are used to store a huge amount of data and others have limited capacity. Some storage infrastructures
can execute complex queries and others cannot. Some can access data very quickly while others
introduce long delays.
Relational Databases
SQL Server databases and the Microsoft Azure SQL Database are the traditional large-scale data sources.
They are designed to store relational data and can execute complex queries and user-defined functions.
Queries are written declaratively in languages such as T-SQL and can execute Create, Read, Update, and
Delete (CRUD) operations.
File System
A file system is used to store and retrieve unstructured data on disks. The basic unit of data is a file. Files
are organized in a tree of directories with a volume as their root. Operating systems such as Windows
and Linux use file systems as their basic storage system.
Distributed Caches
Data access from relational databases is considered a lengthy operation. To reduce latency, some data can
be cached in memory, yet the size of such a cache is limited. A distributed in-memory cache solves the size
limitation by using an arrangement of networked computers, which store in-memory data as key-value
pairs and provide an experience that mimics a single cache to the end user. Distributed caches will be
discussed in Module 10, "Scaling Services" of Course 20487.
NoSQL Databases
NoSQL databases are an umbrella term for many types of data stores, each storing data in a non-relational
fashion. NoSQL databases are often used to store large amounts of data. These data stores are schema-
free, but data can be organized in a variety of different models such as document databases, key-value
stores, columnar databases, or graph databases.
Azure Cosmos DB
Azure Cosmos DB is Microsoft's globally distributed database. It offers great scalability and availability
capabilities. It also supports different models such as:
Cloud-Storage
Infrastructures such as Microsoft Azure Storage enable cloud and on-premises applications to store their
data, which can be structured or unstructured, on a high-scale and persistent data store. Storage exposes
an interoperable API based on HTTP that can be used by any application running on any platform.
Microsoft Azure Table storage can be referred to as a key-value NoSQL database in the cloud, and
Microsoft Azure Blob storage is like a huge file system in the cloud. Storage will be discussed in Module 7,
"Implementing data storage in Microsoft Azure" of Course 20487.
In-Memory Stores
In-memory stores are the fastest data stores but are limited in size, not persistent, and hard to use in a
multi-server environment. In-memory stores are used to store temporary data, local volatile data, or
replication of data that was retrieved from an external data source.
.NET Technologies
Applications written in .NET Core usually access data. .NET Core provides a variety of data access
technologies:
• System.IO contains all the infrastructure required to access data persisted on a file system.
FileStream provides the basic read/write operations, and classes such as FileInfo and
DirectoryInfo provide the required metadata.
• ASP.NET Core introduces a powerful in-memory cache that can be used by any .NET Core application
(see the sketch after this list).
• Distributed cache solutions, such as Azure Redis Cache, are an in-memory store for .NET Core
applications, which negates the memory size limitation of in-memory caches by distributing cache
objects over several servers. Using a distributed cache provides scalability and enhances the durability
of cache items by saving copies of the cache items on participating nodes and by avoiding the need
to recreate the cache items on a temporary server failure. A distributed cache requires cached objects
to be serializable so they can be transported to other nodes in the cache cluster.
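The following is a minimal sketch of using the ASP.NET Core in-memory cache (IMemoryCache); the
QuoteService class, the cache key, and the LoadQuote helper are illustrative assumptions, not part of any
specific product API beyond IMemoryCache itself.

using System;
using Microsoft.Extensions.Caching.Memory;

public class QuoteService
{
    private readonly IMemoryCache _cache;

    // IMemoryCache is registered by calling services.AddMemoryCache()
    // in Startup.ConfigureServices.
    public QuoteService(IMemoryCache cache)
    {
        _cache = cache;
    }

    public string GetQuote(string symbol)
    {
        // Cache each quote for one minute to avoid repeated lookups.
        return _cache.GetOrCreate(symbol, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(1);
            return LoadQuote(symbol);
        });
    }

    // Hypothetical helper standing in for a real data-source lookup.
    private string LoadQuote(string symbol) => $"{symbol}: 42.00";
}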
HTTP-Based APIs
A vast variety of technologies are used to create client applications that consume data from services. This
illustrates the importance of exposing data in standard and widespread protocols such as HTTP, which
provides an easy, standard, resource-based access to data.
Storage provides both HTTP and Managed APIs to access large unstructured data objects, such as videos
and images. Azure Table Service provides a NoSQL, key-value store for storing small objects, up to 1
megabyte (MB) per entity. Objects can also be stored by using Blob Service as binary blocks of data with a
size limit of 200 gigabytes (GB) per object.
LINQ
LINQ is a C# feature used for querying in a declarative fashion. LINQ technology can be used to support
any kind of data source and provides a standard, consistent way to integrate data from different sources.
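The following is a minimal sketch of a declarative LINQ query over an in-memory array; the data is
illustrative, and the same query syntax applies to other LINQ-enabled data sources.

using System;
using System.Linq;

public class Program
{
    public static void Main()
    {
        int[] numbers = { 5, 12, 8, 3, 21 };

        // Declarative query: select the even numbers in ascending order.
        var evens = from n in numbers
                    where n % 2 == 0
                    orderby n
                    select n;

        foreach (int n in evens)
        {
            Console.WriteLine(n); // prints 8, then 12
        }
    }
}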
Question: Why is it important for applications to support HTTP for data access?
Lesson 3
Service Technologies
Services constitute a layer in application architecture, which exposes business logic capabilities to other
application components to improve component modularity and reusability.
Services are the core of distributed applications providing access to data and making it possible for users
to interact with other applications.
Services provide distributed applications with the ability to scale and meet the growing demands for
better performance, robustness, and interoperability for various consumers, whether it is a web
application, a mobile application, or even another service.
Using services as a layer for the application business logic also contributes to the maintainability and
testability of the application, therefore improving the application's quality. Separation of layers helps
enforce the Single Responsibility Principle (SRP), making it possible to test each layer as an
independent portion.
In this lesson, you will learn about services and how services are integrated into application architecture,
services technologies, and .NET services technologies.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe HTTP-based services.
HTTP-Based Services
A web service is a method of data transfer between software components, based on web technologies.
HTTP is an application-layer protocol, which defines a set of characteristics for establishing request-
response communication between two networked nodes. HTTP characteristics consist of methods
(usually referred to as verbs) that can be performed on a remote computer, security extensions (HTTPS),
authentication, status code conventions, and more.
HTTP-based web services are mostly used to manage resources that are a part of the HTTP paradigm,
custom structured textual resources, images, and more. Managing resources by using HTTP web services is
natural and is based on Uniform Resource Identifiers (URIs) for resource identification and verbs for
performing operations on the selected resource.
You can use ASP.NET Core to create a rich, testable, and customizable environment for creating HTTP-
based services.
HTTP-based services are covered in Module 3, "Creating and Consuming ASP.NET Core Web APIs", Lesson
1, "HTTP Services" in Course 20487.
Definition of Micro-Services
The traditional monolith architecture has some problems supporting the agility needed in modern
software development, such as tight coupling to the same development tools across all development
teams, difficulty in deploying the software because all parts must be tested before deployment, and
difficulty in implementing scalability because different parts scale in different ways. Micro-services are a
popular, modern solution to these problems.
Benefits of micro-services include:
• Fault-tolerant services.
• A crash of one service does not affect other services.
• Easy to scale.
Deploying services is covered in Module 6 "Deploying and managing services" in Course 20487.
ASP.NET Core
With ASP.NET Core, you can create HTTP services utilizing HTTP verbs and URIs, with support for fully
interoperable services that can be consumed by many platforms, due to the wide support for HTTP
across different environments.
Based on HTTP characteristics, ASP.NET Core uses HTTP headers to help consumers determine the
format of data they expect to get back from the service. A single service implementation can generate
responses in the JSON format, a human-readable text-based standard, XML, and other encoding formats,
without special handling on the service side.
ASP.NET Core is covered in depth in Module 3, "Creating and Consuming ASP.NET Core Web APIs",
and Module 4, "Extending ASP.NET Core HTTP Services" of Course 20487.
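The following is a minimal sketch of a .NET client using the Accept header to ask a service for a JSON
response; the service URL is illustrative.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The Accept header tells the service which representation
            // (JSON here) the client expects in the response.
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            string json = await client.GetStringAsync("http://localhost:5000/api/books");
            Console.WriteLine(json);
        }
    }
}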
Lesson 4
Cloud Computing
Cloud computing is revolutionizing the way you develop services and applications. The on-demand model
of cloud computing provides new ways to scale and provide better availability of services.
The continuous growth of data, platforms, and users requires a more robust and capacity-unlimited
platform to take on the expected load.
In this lesson, you will learn about cloud computing and its benefits, some architectural considerations for
setting up cloud computing, and the cloud computing products from Microsoft that are based on Azure.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain what cloud computing is.
Cloud computing can handle these scalability and availability challenges by using an on-demand
approach to computing. In cloud computing, you lease computing resources from a cloud vendor, based
on your current needs.
Typically, cloud services consist of a group of servers and storage resources scattered in different physical
locations. Cloud services share resources to provide hosted application high-availability, flexibility, and
maximum utilization of hardware.
The following illustration demonstrates the utilization of resources in hosting a service or an application
on a local data center compared to the cloud.
While cloud provisioning maintains a stable provisioning slightly above the application usage, as shown in
the preceding graph, on-premises provisioning fails to keep up with the application usage needs in two
scenarios. When the application grows rapidly, the static on-premises provisioning causes under-
provisioning. When the application usage drops, on-premises provisioning cannot scale down and causes
over-provisioning.
Cloud computing provides unlimited scaling in case of unpredicted load and enhances high-availability
and performance by taking advantage of the large capacity of available bandwidth, storage, and
computing resources.
Hosting application and services on the cloud also improves utilization of resources by using an elastic
approach. An elastic approach is the scaling out of resources to meet the growing demand when needed
and scaling down when the demand is down again. This improves flexibility and reduces operational costs.
The following illustration shows some of the growth patterns that are common in modern applications
and can benefit from using cloud computing.
Cloud computing vendors, such as Azure, also provide a wide range of features for hosted services and
applications, for data storage, caching, and more. Azure features will be covered in detail in later modules
and lessons.
Cloud platforms typically offer several service models:
• Platform as a Service (PaaS). With PaaS, the cloud platform provides a ready-to-use infrastructure,
which includes an operating system, storage, databases, auto-configured load-balancer, backup,
replication and more. The software vendor can focus on creating the required database schema and
data, and deploy the application. The platform will take care of the rest, providing an on-demand
application-hosting environment that can be cloned and scales automatically.
• Function as a Service (FaaS). With FaaS, the cloud platform provides a ready-to-use platform to
develop, run, test, and deploy, without the need for managing infrastructure such as virtual machines
(VM). This technique is called "serverless" architecture and it is typically used in micro-services
applications. In this strategy, the platform handles scalability and availability.
• Software as a Service (SaaS). With SaaS, software vendors can provide their users with ready-to-use,
on-demand software that benefits from the inherent capabilities of a cloud platform. SaaS provides
business flexibility by leveraging cloud platform features such as scalability, high availability,
self-management, backup, and more.
Introduction to Azure
Azure is the cloud computing platform offered by Microsoft. Azure consists of pairs of data centers
located in some key areas in North America, South America, Europe, Australia, and Asia, including China.
As a complete cloud computing solution, Azure provides an on-demand, scalable, self-service computing
and storage resource platform for hosting services and applications built with a wide variety of
technologies, such as .NET Core, Java, Python, and PHP, using SQL databases or MySQL, hosted on
Windows or Linux operating systems.
Azure supports a wide variety of platforms and technologies making it possible to host whole solutions
and not only standalone services.
Azure also offers a set of building blocks services for managing identities, communication, and media.
Azure also includes inherent features for scalability, replication, and backup, and advanced storage types,
which will be introduced in the following modules.
Web Apps
Web Apps are designed to host applications and services. They are exposed to the internet, and their
scalability and availability make them a prime choice for hosting the front end of an application. Azure
also manages all the infrastructure in Web Apps, such as the VM, the operating system, and the
application stack. With Web Apps, it is possible to host applications written on different platforms,
including .NET, PHP, Node.js, and more. Web Apps is covered in depth in Module 5, "Hosting Services",
Lesson 2, "Hosting Services in Azure Web Apps".
Function Apps
Function Apps are designed to run code. Each function can be run by triggers such as an HTTP request, a
timer, a message posted to a queue, a blob added to blob storage, and more. Function Apps simplify
development and deployment: the developer focuses only on writing the code and publishing it to Azure
Functions, and Azure runs it whenever it is triggered. Because Azure handles scalability and availability,
Function Apps are a prime choice for building micro-services and serverless applications. Function Apps
is covered in depth in Module 5, "Hosting Services", Lesson 4, "Implementing Serverless Services".
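The following is a minimal sketch of an HTTP-triggered function, as it might look in the Azure Functions
2.x C# programming model; the function name and greeting logic are illustrative.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // Runs whenever an HTTP GET request arrives at the function's URL.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Hello function triggered.");
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {name ?? "world"}!");
    }
}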
Container Services
Container Services is designed to host container-based applications. A container is a stand-alone package
of software that runs as an isolated process. The developer packages the software and publishes it to the
Container Service, and Azure runs it. Because Azure handles scalability and availability, Container Services
is a prime choice for building micro-services applications. Container Services is covered in depth in
Module 5, "Hosting Services", Lesson 3, "Packaging Services in Containers".
Storage
Due to the unpredictable nature of cloud services, data cannot be persisted reliably on the virtual
machines. Storage is a cloud-based, large-scale data store for persisting data in the cloud. Storage
provides an HTTP API for storing data objects. Data objects can be stored in three available types of
storage: Blob storage, Table storage, and Microsoft Azure Queue storage.
• Table storage. This type of storage is a semi-structured collection of objects that can have fields but
cannot have relations between objects. The fields are not bound to a schema structure, and different
objects can have different fields within the same collection. Table storage also provides a queryable
API access to find objects.
• Microsoft Azure Files. This type of storage is file-based storage. It supports Server Message Block
(SMB) and HTTP-based access without mounting, and can be accessed from multiple clients at the
same time.
Storage is covered in depth in Module 7, "Implementing data storage in Microsoft Azure" in Course
20487.
Azure AD has single sign-on (SSO) access to Office 365 and many third-party applications, such as
Dropbox and Salesforce.
If you have Active Directory installed on-premises, you can easily integrate it with Azure AD to provide
better connectivity while not on-site, with seamless integration with current identity management policies.
Azure AD is covered in depth in Module 9, "Securing services on-premises and in Microsoft Azure" in
Course 20487.
Azure Redis Cache simplifies migration for applications that use on-premises in-memory or distributed
cache solutions. You can also use Azure Redis Cache to replace the session state and output cache
providers of ASP.NET.
Content Delivery Network is covered in depth in Module 10, "Scaling Services" in Course 20487.
SQL Database
SQL Database is a cloud-based relational database as a service, based on Microsoft SQL Server
technologies. SQL Database is fully scalable and provides high-availability access, support for SQL
Reporting, and enables data replication between cloud and on-premises databases.
Azure IaaS
Hosting applications and services on the Azure platform is easy, and the power and productivity of the
Azure PaaS infrastructure enable you to meet most of the requirements of common services. Sometimes,
however, you might need more fine-grained control, to host applications and services on different
operating systems such as Linux, or to host multi-server environments.
Azure IaaS provides a platform for hosting custom virtual machines in the cloud, providing you with
better control over the hosting servers. This makes it possible to manage every aspect of the desired
solution, starting from the operating system, virtual network configuration, complicated software
pre-installation requirements, and local disk persistency.
Azure provides a set of operating system images to choose from while creating a virtual machine, which
can include Linux distributions and partners’ solutions. You can also create a custom virtual machine
on-premises, upload it, and then deploy it to the cloud. Azure provides various ways to host all kinds of
software and services.
You can migrate currently deployed applications by uploading a whole solution consisting of multiple
machines to the cloud for seamless continuation. Downloading virtual machines from Azure to be hosted
on-premises is supported as well.
Microsoft Azure Virtual Machines uses Virtual Hard Disks (VHDs) that are stored on Storage. By storing the
VHDs in Storage, you get durability, because the disks are replicated to three copies and are saved in two
different data centers.
Azure provides an API for deployment and management capabilities, both in PowerShell cmdlets (scripts),
and programmatically using HTTP API, making it possible to create custom management tools integrated
into any software solution.
Question: When would you choose IaaS over PaaS?
Demonstration Steps
You will find the steps in the “Demonstration: Exploring the Microsoft Azure Portal“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD01_DEMO.md.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD01_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD01_LAK.md.
Best Practices
• Plan your application architecture to fit the technical requirements while understanding the
limitations of distributed architecture.
• Choose the database technology that will let you scale according to your application usage,
combining different approaches when appropriate (relational, NoSQL).
• Think of your consumers when choosing a service technology. Use HTTP services for high
compatibility and resource-based communication.
• Describe your software deployment and configuration in detail before choosing a cloud computing
strategy (IaaS, PaaS).
Review Question
Question: What are the key benefits of micro-services architecture?
Module 2
Querying and Manipulating Data Using Entity Framework
Core
Contents:
Module Overview 2-1
Module Overview
Typically, all applications store some data in a database. Some examples of data include configuration
settings, application data, user information, documents, and many others.
The .NET Framework provides a set of tools that helps you access and manipulate data that is stored in a
database. In this module, you will learn about the Entity Framework Core data model, and about how to
create, read, update, and delete data. Entity Framework Core is a rich object-relational mapper, which
provides a convenient and powerful application programming interface (API) to manipulate data.
This module focuses on the Code First approach with Entity Framework Core.
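As a preview of the Code First approach, the following is a minimal sketch of an entity and a context; the
Product entity, the StoreContext class, and the connection string are illustrative.

using Microsoft.EntityFrameworkCore;

// By convention, EF Core maps this entity to a Products table and
// treats the Id property as the primary key.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    // Requires the Microsoft.EntityFrameworkCore.SqlServer provider package.
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
        optionsBuilder.UseSqlServer(
            @"Server=(localdb)\MSSQLLocalDB;Database=Store;Trusted_Connection=True;");
}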
Objectives
After completing this module, you will be able to:
• Describe basic objects in ADO.NET and explain how asynchronous operations work.
• Create an Entity Framework Core data model.
Lesson 1
ADO.NET Overview
ADO.NET is the original low-level data access API in the .NET Framework. Although this module does not
focus on ADO.NET, understanding basic objects and operations from the ADO.NET library is essential for
using higher-level approaches, such as Entity Framework Core.
This lesson describes fundamental ADO.NET operations and its asynchronous support.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe the basic objects of ADO.NET.
• Explain how asynchronous operations work with ADO.NET.
ADO.NET ships with a built-in data provider for SQL Server (System.Data.SqlClient). To connect to other
databases, you can often find third-party data providers online, or you can implement your own data
provider.
The rest of this topic focuses on fundamental ADO.NET concepts and classes. Each data provider has its
own classes, which implement a set of common interfaces.
Connection
Use the ADO.NET connection object to connect to your database. The type of ADO.NET connection object
that implements the IDbConnection interface is SqlConnection.
A connection object is responsible for connecting to the database and initiating additional operations,
such as executing commands or managing transactions. Typically, you create a connection object with a
connection string, which is a locator for your database and may contain connection-related settings, such
as authentication credentials and timeout settings.
Command
Use the ADO.NET command object to send commands to the database. Commands can either return
data, such as the result of a select query or a stored procedure, or have no data returned, such as when
you use an insert or delete statement, or a Data Definition Language (DDL) query. The type of ADO.NET
command object that implements the IDbCommand interface is SqlCommand.
A command object can represent a single command or a set of commands. Query commands return a set
of results, as a DataReader object or a DataSet object, or a single value, usually the result of an
aggregated action, such as a row count, or calculation of an average.
DataReader
Use the ADO.NET data reader to dynamically iterate a result set obtained from the database. If you use a
data reader to access data, you must maintain a live connection while you read from the database.
Additionally, data readers can only move forward while iterating the data. This data-access strategy is also
referred to as the connected architecture. For SQL Server, the ADO.NET data reader object that
implements the IDataReader interface is SqlDataReader.
The following code example demonstrates how to query a database with a data reader; the connection
string and query text are illustrative.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand("SELECT Id, Name FROM Students", connection);
    SqlDataReader reader = command.ExecuteReader();
    if (reader.HasRows)
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}\t{1}",
                reader.GetInt32(0),
                reader.GetString(1));
        }
    }
    else
    {
        Console.WriteLine("No data found.");
    }
    reader.Close();
}
When using a data reader, you can access only one database record at a time, as shown in the preceding
example. If you need multiple records at once, it is your responsibility to store them as you move along to
the next record. Although this seems like a major inconvenience, data readers are very efficient in terms of
memory utilization, because they do not require the entire result set to be fetched into memory.
DataAdapter
Use the ADO.NET data adapter to load a result set obtained from a database into memory. After
loading the entire result set and caching it in memory, you can access any of its rows, unlike the data
reader, which only provides forward iteration. You should use this data-access strategy, referred to as the
disconnected architecture, when you do not want to maintain a live connection to the database while
processing the data.
Data adapters store the results in a tabular format. You can also change the data after it is loaded and use
the data adapter to apply the changes back to the database. For SQL Server, the ADO.NET data adapter
object that implements the IDataAdapter interface is SqlDataAdapter.
Although data adapters are convenient to use (especially in conjunction with the DataSet class, which is
explained in the next section), they impose a larger overhead than data readers because the entire result
set must be fetched into memory before you can perform any operations.
DataSet
The DataSet class is one of the most frequently used objects in ADO.NET. You use it to retrieve tabular
data from a database. Although you can fill a DataSet object manually with data, you typically load it by
using the DataAdapter class.
The following code example demonstrates how to load data to a DataSet object by using a data adapter.
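A minimal sketch, assuming the same illustrative connection string and Students table used earlier:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    SqlDataAdapter adapter = new SqlDataAdapter("SELECT Id, Name FROM Students", connection);
    DataSet dataSet = new DataSet();
    // Fill opens and closes the connection automatically.
    adapter.Fill(dataSet, "Students");
    foreach (DataRow row in dataSet.Tables["Students"].Rows)
    {
        Console.WriteLine("{0}\t{1}", row["Id"], row["Name"]);
    }
}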
You can use DataSet objects to hold information from more than one table at one time and maintain
relationships between tables inside a DataSet object.
Question: Why would you prefer using data readers to data adapters, and vice versa?
To execute a command asynchronously, you use the ExecuteXXAsync methods. For example, the
ExecuteReaderAsync method is the asynchronous version of the ExecuteReader method. The asynchronous
methods return a Task<T> object, where the generic type parameter T is the type returned by the
corresponding synchronous method. For example, the ExecuteReaderAsync method returns a
Task<DbDataReader> object, whereas the corresponding synchronous method, ExecuteReader, returns
a DbDataReader object.
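The following sketch shows these asynchronous methods combined with the await keyword; the query
text is illustrative, and the code must run inside an async method:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
    SqlCommand command = new SqlCommand("SELECT Id, Name FROM Students", connection);
    using (SqlDataReader reader = await command.ExecuteReaderAsync())
    {
        // ReadAsync advances the reader to the next row without blocking the thread.
        while (await reader.ReadAsync())
        {
            Console.WriteLine("{0}\t{1}", reader.GetInt32(0), reader.GetString(1));
        }
    }
}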
In addition to the ExecuteXXAsync methods, you can also use the DbConnection.OpenAsync method
to open a database connection asynchronously. You can also use the DbDataReader.ReadAsync method,
as shown in the preceding example, to advance the reader asynchronously to the next row.
Note: The code in the preceding example uses the await keyword introduced in C# 5 to
schedule a continuation when the operation completes. You can also use the
Task.ContinueWith method to provide a delegate as the continuation of the task.
Lesson 2
Creating an Entity Data Model
This lesson describes how to create an Entity Framework Core model. You will learn about the Code First
approach for accessing data with Entity Framework Core.
Lesson Objectives
After completing this lesson, you will be able to:
• Create an Entity Framework Core data model by using the code-first approach.
• Map classes to tables by using data annotations.
• Map classes to tables by using the Entity Framework Core Fluent API.
Code-First
In this approach, the domain model is simply a set of classes with properties that you provide.
You can use the code-first approach both with new databases, and with existing ones. If you do not have
a database, the default behavior of code-first will be to create the database for you the first time you run
your application. If your database already exists, Entity Framework Core will connect to it and use the
defined mappings between your model classes and the existing database tables.
The DbContext class performs the following tasks:
• Handles the database generation for Entity Framework Core Code First.
• Provides basic create, read, update, and delete (CRUD) operations, and simplifies the code that you
must write to execute these operations.
• Handles the opening and closing of database connections.
Note: SQL Express is the free, lightweight version of SQL Server that can be installed on
development machines and ships with Visual Studio. LocalDb is an extension of SQL Express that
offers an easier way to create multiple database instances by using SQL Express. LocalDb ships
with Visual Studio 2017.
You can use a different database (that is not SQL Express or LocalDb) by providing a connection string in
your application configuration file (app.config or web.config). If you pass the name of that connection
string to the DbContext class constructor, it will use the connection string instead of the default database
engine.
The following code demonstrates how to put a connection string in your application configuration file,
and how to use it when creating an instance of the DbContext class.
C#
DbContext context = new DbContext("StudentsDB");
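A minimal sketch of the configuration file entry and a custom context class follows; the StudentsContext,
Student, and Course names match the next paragraph, and the connection string details are illustrative:
XML
<connectionStrings>
  <add name="StudentsDB"
       connectionString="Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=StudentsDB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
C#
public class StudentsContext : DbContext
{
    public DbSet<Student> Students { get; set; }
    public DbSet<Course> Courses { get; set; }
}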
When you create an instance of the StudentsContext class depicted in the preceding code example,
Entity Framework Core will connect to the database and map the Students and Courses properties
according to the mapping information provided by the Student and Course classes.
Note: If you do not pass a database name or connection string name to the DbContext
class constructor, it will use the fully-qualified name of your custom DbContext-derived class as
the database name. For example, if the StudentsContext class depicted in the preceding code
example were in the StudentsManagement namespace, the database name would be
StudentsManagement.StudentsContext.
After you initialize the DbContext object, you can use it to perform CRUD operations on the database by
using the domain model classes you authored. You will learn how to perform CRUD operations in Lesson
3, "Querying Data", and Lesson 4, "Manipulating Data", and learn how to map domain classes to database
tables later in this lesson.
The following example illustrates how to query the database by using the DbContext class, retrieve a set
of objects, manipulate them, and save the results back to the database.
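A minimal sketch; the key value and the new name are illustrative:
using (var context = new StudentsContext())
{
    // Find locates a student by its primary key.
    Student student = context.Students.Find(1);
    student.Name = "Jonathan";
    // SaveChanges propagates the tracked changes back to the database.
    context.SaveChanges();
}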
In the preceding code example, the context.Students property returns an instance of the DbSet<T>
generic class. The DbSet<T> generic class represents a set of entities that you can use to perform CRUD
operations. You can think of it as the object representation of a database table. This class provides the
Find method, which can locate an object based on the database primary key. The example concludes by
calling the SaveChanges method of the DbContext class, which propagates the changes to the database.
Note: It is very important to keep the number of concurrent DbContext objects in your
application low. Each object can open a connection to the database and keep it open for some
time. Too many open connections can cause performance issues, both in your application and
your database. When declaring an instance of the DbContext object, use a using statement. This
will ensure that the database connection is closed and that any in-memory caches for objects you
recently queried are purged from memory.
Change Tracking
When you query the database and retrieve objects by using Entity Framework Core, the DbContext class
can track changes you make to these objects to facilitate saving them back into the database easily. The
Entity Framework Core change tracking system supports two modes of operation:
• Active change tracking. Every property informs the context when it is changed.
• Passive change tracking. The context attempts to detect changes before it determines which
property to save.
When you call the SaveChanges method of the DbContext class, the context checks if active change
tracking is enabled. If only passive change tracking is available, the DbContext object calls the
DetectChanges method. This method enumerates all entities retrieved by the context and compares
every property of every entity to the original value it had when it was retrieved. Any changed properties
are updated in the database.
To support active change tracking, you should mark all your properties on your domain classes (such as
the Student class in the preceding code example) with the virtual keyword. If you do so, Entity
Framework Core will create proxies at run time that derive from your class and track assignments to the
virtual properties of your model.
You can use Code First Migrations to update the database schema automatically to match the changes
you made in your classes without having to recreate the database.
With Code First Migrations, you define the initial state of your classes and your database. After you
change your classes and run the Code First Migrations in design time, the set of changes you performed
over your classes is translated to the required migration steps for the database, and then those steps are
generated as database instructions in code. You can apply the changes to the database in design-time
before deploying the version of the application. Alternately, you can have the application run the
migration code after it starts. Code First Migrations is outside the scope of this course, but you can read
more about it on MSDN:
To map a class to a database table, add the [Table] attribute to the class declaration and specify the table
name. For example, [Table("Products")] maps the class to the Products table. To map a property to a
database column, add the [Column] attribute to the property declaration. For example,
[Column("ProductName")] maps the property to the ProductName column.
Note: By default, Entity Framework Core will use the plural form of the class name when
mapping a class to a database table. For example, the class Product will be mapped to a table
named Products, and properties will be mapped to database columns of the same name. You
should use the [Table] and [Column] attributes only if you want to customize these defaults.
The following example shows how to map a class to a database table by using code-first data annotations.
[Table("GlobalProducts")]
public class Product
{
    public int Id { get; set; }

    [Column("ProductName")]
    public string Name { get; set; }
}
In the preceding code example, the Product class is mapped to a database table named GlobalProducts,
the Id property is mapped implicitly to a database column named Id, and the Name property is mapped
to a database column named ProductName.
The Id property in the preceding code example will be set as the primary key of the table, because the
convention for primary key is that either the property is named Id (or ID, the casing is ignored) or named
after the class, followed by Id, for example, ProductID.
When you map a property to a primary key column, by default, Entity Framework Core will set the value
of the column to be generated by the database automatically. For integer columns, the value will be
auto-incremented; for columns of type GUID, the database will generate a new GUID for each row. If you do
not want to use generated primary keys, and instead you want to provide the primary key value yourself
when creating the entity object, configure the primary key property with the
[DatabaseGenerated(DatabaseGeneratedOption.None)] attribute. To use the DatabaseGenerated
attribute, add a using directive to the System.ComponentModel.DataAnnotations.Schema
namespace.
You can map a foreign key relationship in two ways by using data annotations:
• From the foreign key property to the entity property.
• From the entity property to the foreign key property.
The following code example shows how to set a foreign key of a nested object to a property of your class
by using two approaches.
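A minimal sketch of both approaches; the Enrollment class and the Teacher-related members are
illustrative, while the Course and CourseId names match the note and the convention discussion that
follow:
public class Enrollment
{
    public int Id { get; set; }

    // Approach 1: annotate the foreign key property with the name of the entity property.
    [ForeignKey("Course")]
    public int CourseId { get; set; }
    public Course Course { get; set; }

    // Approach 2: annotate the entity property with the name of the foreign key property.
    public int TeacherId { get; set; }
    [ForeignKey("TeacherId")]
    public Teacher Teacher { get; set; }
}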
Note: The preceding code example illustrates a to-one relationship (either one-to-one or
many-to-one) from the enclosing entity to the Course entity. To specify a to-many relationship
(either one-to-many, or many-to-many), change the type of the entity property to
ICollection<T> or IEnumerable<T>.
By having both the foreign key property and entity property for each foreign key relationship, you gain
flexibility. If necessary, you can ask Entity Framework Core to fetch the referenced entity (as shown in the
Course class in the preceding code example) along with the enclosing entity, or you can refrain from
fetching it and rely only on its key, for performance reasons.
Note: If you do not use data annotations, and instead rely on the Code First convention for
foreign keys, you must make sure that the foreign key property is named as the entity property,
followed by Id (casing is ignored). In the preceding example, the entity property is named
Course and the foreign key property is named CourseId, therefore the data annotation
attributes are not required.
Demonstration Steps
You will find the steps in the “Demonstration: Creating an Entity Type, DbContext, and DbInitializer“
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
You can use the Fluent API by overriding the OnModelCreating method of your DbContext class. The
OnModelCreating method gives you access to a ModelBuilder object, which you use to declare the
association between your domain classes and the database tables, columns, and keys.
The following code example shows how to map a class to a database table, then map the key field of the
class, and then map a property of the class to a database column.
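A minimal sketch, reusing the Product class from the data annotations example:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Map the Product class to the GlobalProducts table.
    modelBuilder.Entity<Product>().ToTable("GlobalProducts");
    // Declare the Id property as the primary key.
    modelBuilder.Entity<Product>().HasKey(p => p.Id);
    // Associate the Name property with the ProductName column.
    modelBuilder.Entity<Product>().Property(p => p.Name).HasColumnName("ProductName");
}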
In the preceding code example, the ModelBuilder object is used to map the Product class to the
GlobalProducts table, to declare that its Id property is the primary key, and to associate the Name
property with the ProductName database column. This achieves the same result as the data annotations
example illustrated in Topic 4, "Mapping Classes to Tables with Data Annotations".
You can also use the Fluent API by using a class that implements the IEntityTypeConfiguration interface
for each domain class you have. You still need to associate the configuration classes with your
DbContext-derived class by using the OnModelCreating method.
The following code example illustrates how to use the Fluent API with a class that implements the
IEntityTypeConfiguration interface.
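A minimal sketch; the ProductMapping class matches the description that follows:
public class ProductMapping : IEntityTypeConfiguration<Product>
{
    public void Configure(EntityTypeBuilder<Product> builder)
    {
        builder.ToTable("GlobalProducts");
        builder.HasKey(p => p.Id);
        builder.Property(p => p.Name).HasColumnName("ProductName");
    }
}
// In the DbContext-derived class, register the configuration:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfiguration(new ProductMapping());
}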
The ProductMapping class in the preceding example implements the IEntityTypeConfiguration generic
interface, and calls several methods in the Configure method to associate the Product class with the
GlobalProducts table. This again achieves the same result as using data annotations.
For additional examples of Configuring/Mapping Properties and Types with the Fluent API,
see http://go.microsoft.com/fwlink/?LinkID=313730
Mapping Type Inheritance to Tables
When you work in an object-oriented environment, you can use inheritance to reflect real-world
relationships. When you work with an ORM, the inheritance relationships should hold when you map
objects to database tables.
In the examples in this topic, you will see how to implement inheritance for the base class Person and two
inheriting classes: Student and Teacher.
TPT (Table per Type)
In the TPT approach, a separate table represents each class. The derived class’ table has a foreign key
property that associates it with the base class’ table. The derived class’ table contains columns only for
properties declared in that class.
To create such an object-relational mapping, use data annotations to give each class a different table
name.
TPH (Table per Hierarchy)
In the TPH approach, a single table represents the entire inheritance hierarchy. All the inherited types are
represented in the same table. When you map the table to domain classes (such as the Teacher and
Student classes), you only map the relevant properties for each class. This means that the database
representation of a Teacher object will have a null value for the Grade column, which only the Student
class has.
To create such an object-relational mapping, use data annotations to give all classes the same table name.
You can also remove the [Table] attribute from the classes, because this is the default behavior of Code
First for handling inheritance mapping.
Note: When creating the Person table, Entity Framework Code First will add a
discriminator column to the table and use the type names (Person, Student, and Teacher) to
indicate which object type is stored in each row. You need not be aware of the discriminator
column or use it directly.
TPC (Table per Concrete Type)
In the TPC approach, each concrete (non-abstract) class is represented in the database as its own
table. As a result, the database schema is not normalized, but mapping the tables to classes is much
easier.
This example shows how to implement inheritance by using the TPT approach. The code defines three
classes named Person, Student, and Teacher. Student and Teacher inherit from Person, and every class
is mapped to a different database table.
TPT example
public class MyDbContext : DbContext
{
public DbSet<Person> Persons { get; set; }
public DbSet<Student> Students { get; set; }
public DbSet<Teacher> Teachers { get; set; }
}
[Table("Person")]
public abstract class Person
{
public int Id { get; set; }
public string Name { get; set; }
public DateTime DateOfBirth { get; set; }
}
[Table("Student")]
public class Student : Person
{
public int Grade { get; set; }
}
[Table("Teacher")]
public class Teacher : Person
{
public decimal Salary { get; set; }
}
Question: Why would you use the Fluent API as opposed to data annotations?
Lesson 3
Querying Data
So far, you learned how to map domain classes in your application to database tables. This lesson explains
how to query data from a database by using SQL and Entity Framework Core.
Lesson Objectives
After completing this lesson, you will be able to:
• Query data by using LINQ to Entities.
• Run SQL statements and stored procedures by using Entity Framework Core.
• Control how related entities are loaded by using eager, lazy, and explicit loading.
LINQ to Objects queries execute in memory on a collection of items, whereas LINQ to Entities queries
are translated to SQL statements and executed in the database.
Note: Every LINQ to Entities query is translated to SQL statements and executed at the
database level as a plain SQL statement by using ADO.NET. This is extremely important for
performance reasons. Executing a LINQ to Objects query on a table with millions of records
requires fetching the entire table into memory, whereas executing a LINQ to Entities query on the
same table can be extremely fast because the query executes on the database server.
This example shows how to retrieve a list of students from the database and filter it by the name of the
student. The context variable is a reference to a custom DbContext-derived class instance, and its
Students property returns a reference to a DbSet<Student> object.
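A minimal sketch; the filter value is illustrative:
var students = from s in context.Students
               where s.Name.StartsWith("John")
               select s;
foreach (Student student in students)
{
    Console.WriteLine(student.Name);
}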
There are some limitations as to which operators and methods you can use in your LINQ to Entities
queries. Because every LINQ to Entities query is translated to SQL and executed on the database server,
some LINQ features and .NET Core methods are not supported by Entity Framework Core. For example,
you cannot use the String.IsNullOrWhiteSpace method and the Last LINQ query operator.
Best Practice: As with LINQ to Objects, queries written with LINQ to Entities are not
executed until they are enumerated, for example, by using foreach, or by calling the ToList or
FirstOrDefault extension methods. If you enumerate a LINQ to Entities query for the second
time, it will execute again in the database. For example, if you invoke the Count method of the
query several times, each invocation will execute the SQL statement again in the database.
Therefore, as a best practice, if you need to use the result of the query more than once, you
should store the result in a local variable.
Demonstration Steps
You will find the steps in the “Demonstration: Using Language-Integrated Query (LINQ) to Entities"
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
The difference between running SQL queries with ADO.NET and with Entity Framework Core is that with
Entity Framework Core, the result is automatically translated to the domain classes, instead of being
returned as a DbDataReader object.
The following code example demonstrates how to execute an SQL query statement with Entity Framework
Core to retrieve objects from the database.
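A minimal sketch using the FromSql method available in Entity Framework Core 2.x; the SQL text is
illustrative:
using (var context = new StudentsContext())
{
    // The returned rows are materialized as Student entities.
    var students = context.Students
        .FromSql("SELECT * FROM dbo.Students WHERE Name LIKE '%John%'")
        .ToList();
}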
Finally, you can also execute SQL statements that return a single value or no value at all. For example, you
could execute an insert statement to insert a new entity into the database or execute a stored procedure.
To execute an SQL statement that does not return a result set, use the ExecuteSqlCommand method,
which returns the number of rows affected.
The following example demonstrates how to execute such an SQL statement by using the
ExecuteSqlCommand method.
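A minimal sketch; the SQL statement is illustrative:
using (var context = new StudentsContext())
{
    // ExecuteSqlCommand returns the number of rows affected by the statement.
    int rowsAffected = context.Database.ExecuteSqlCommand(
        "UPDATE dbo.Students SET Grade = Grade + 1 WHERE Grade < 100");
}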
Question: Why would you use Entity SQL or direct SQL instead of LINQ to Entities?
Demonstration Steps
You will find the steps in the “Demonstration: Running Stored Procedures with Entity Framework“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
Question: When would you invoke stored procedures from your application instead of
performing object manipulations by using Entity Framework Core?
When using eager loading, Entity Framework Core returns the entire data set in one big round trip to the
database. Eager loading might take longer than multiple small round trips that return only part of the
result, depending on the complexity of the large query. Lazy loading was introduced in Entity Framework
Core 2.1 and must be configured manually.
The following code example demonstrates how to configure lazy loading in the OnConfiguring method
by using the UseLazyLoadingProxies method.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseLazyLoadingProxies()
        .UseSqlServer(ConfigurationManager.ConnectionStrings["BloggingDatabase"].ConnectionString);
    base.OnConfiguring(optionsBuilder);
}
When issuing a query, call the Include method to specify which entities should be eagerly loaded with the
containing entity. This is the most flexible way to instruct Entity Framework Core when you want to use
eager loading, and it is the recommended approach.
The following code example demonstrates how to use eager loading with the Include method to retrieve
the property contents of the Courses entity along with the Student entity.
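A minimal sketch, assuming the Student entity exposes a Courses collection property:
using (var context = new StudentsContext())
{
    // Each student is returned with its Courses collection already populated.
    var students = context.Students
        .Include(s => s.Courses)
        .ToList();
}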
To enable lazy loading of your related entities, you need to declare your relationship properties, which
contain references to other entities, as virtual. If you reference a list of related entities, your virtual
property must be of type ICollection<T> or a derivative of it, such as IList<T>. You cannot use lazy
loading with IEnumerable<T>. By setting the properties to virtual, you ensure that Entity Framework
Core derives a new proxy class from the original class and adds the lazy load logic to the property.
If you have non-virtual properties, you can explicitly load them at run time by using the Load method.
The following code example shows how to load a non-virtual referenced entity explicitly.
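A minimal sketch; the Courses collection and the Address reference properties are illustrative:
using (var context = new StudentsContext())
{
    Student student = context.Students.First();
    // Explicitly load a collection of related entities.
    context.Entry(student).Collection(s => s.Courses).Load();
    // Explicitly load a single referenced entity.
    context.Entry(student).Reference(s => s.Address).Load();
}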
In the preceding example, the Entry method returns a DbEntityEntry object, which you can use to access
information about the entity, such as its original values and its state (unmodified, deleted, and so on).
The DbEntityEntry provides information about the referenced entities and collections through
which you can explicitly load each relation. Similar to the Include method, the Collection and Reference
methods can also use a string parameter instead of the lambda expression.
If you have defined your reference and collection properties as virtual, and you want at some point to
momentarily turn off lazy loading on an entire context, set the LazyLoadingEnabled property of the
DbContext instance to false.
The following code example shows how to turn off lazy loading for the entire context. The context
variable refers to a DbContext object.
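A minimal sketch; in Entity Framework Core 2.1, the setting is exposed through the change tracker:
// Related entities are no longer loaded automatically when their properties are accessed.
context.ChangeTracker.LazyLoadingEnabled = false;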
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_LAK.md.
Lesson 4
Manipulating Data
Until this point, you learned how to query data from a database by using LINQ to Entities, Entity SQL, and
even direct SQL statements. However, querying data is not the whole story. This lesson explains how to
manipulate data by using Entity Framework Core.
Lesson Objectives
After you complete this lesson, you will be able to:
• Describe how Entity Framework Core tracks the state of entities.
• Add, update, and delete entities by using the DbContext class.
• Use transactions to group multiple operations.
Each entity that the context tracks is in one of the following states:
• Added. The entity was added to the context and did not exist in the database.
• Modified. The entity was changed since it was retrieved from the database.
• Unchanged. The entity was not changed since it was retrieved from the database.
• Detached. The entity was detached from the context, so changes to it will not be reflected in the
database.
• Deleted. The entity was deleted since it was retrieved from the database.
You can inspect the state of all the entities that have been changed in some way by using the
DbContext.ChangeTracker.Entries method. This could be useful for logging purposes or for reverting
certain changes in an overridden implementation of the SaveChanges method of the DbContext class.
The following code example demonstrates how you can enumerate all the objects that have been added,
modified, or deleted in an overriding implementation of the SaveChanges method.
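A minimal sketch of such an override inside a DbContext-derived class; the logging output is illustrative:
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added ||
                    e.State == EntityState.Modified ||
                    e.State == EntityState.Deleted))
    {
        Console.WriteLine("{0}: {1}", entry.State, entry.Entity.GetType().Name);
    }
    return base.SaveChanges();
}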
Furthermore, from an instance of the DbContext class you can retrieve and modify state information for
any entity that has been loaded into the context by using the Entry method. One use of this would be to
mark an entity as deleted; another use would be to replace the values of an entity with new values
provided externally to your API.
The following code example illustrates how you can modify state information for an entity and how you
can copy the values from one entity to another.
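A minimal sketch; studentToDelete, trackedStudent, and updatedStudent are assumed variables:
// Mark an entity as deleted without querying for it first.
context.Entry(studentToDelete).State = EntityState.Deleted;
// Replace the values of a tracked entity with externally provided values.
context.Entry(trackedStudent).CurrentValues.SetValues(updatedStudent);
context.SaveChanges();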
Finally, you can turn change tracking on and off globally by using the AutoDetectChangesEnabled
property of the ChangeTracker property of the DbContext class.
The following code example shows how you can turn change tracking on and off.
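A minimal sketch:
// Turn off automatic change detection, for example before a bulk operation.
context.ChangeTracker.AutoDetectChangesEnabled = false;
// ... perform many changes ...
// Detect the changes manually, and then save.
context.ChangeTracker.DetectChanges();
context.SaveChanges();
context.ChangeTracker.AutoDetectChangesEnabled = true;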
If you use the preceding code to turn off automatic change tracking, you will have to call the
DbContext.ChangeTracker.DetectChanges method manually before you save any changes.
Note: Automatic change tracking is enabled by default, but it only applies to properties
marked as virtual. Non-virtual properties cannot be overridden by the run-time proxy, and
therefore Entity Framework Core cannot detect when the property's value changes.
To add a new entity to the database, add it to the appropriate DbSet property of the context, and then
call the SaveChanges method, as shown in the following example.
Adding an entity
using (var context = new MyDbContext())
{
context.Persons.Add(
new Person
{
DateOfBirth = new DateTime(1978, 7, 11),
Name = "John Doe"
});
context.SaveChanges();
}
Deleting Entities
To delete an entity from the database, you use the DbContext object. When you delete an entity from a
database, the context marks the change tracking status of the entity as Deleted. When you call the
SaveChanges method, the DbContext object deletes the entity from the database.
Deleting an entity
using (var ctx = new ProductsContext())
{
var product = (from m in ctx.Products
               where m.Name == "Orange Juice"
               select m).Single();
ctx.Products.Remove(product);
ctx.SaveChanges();
}
If you already know the primary key of the entity that you want to delete, you do not need to retrieve it
from the database to delete it. You can manually add an entity with the desired primary key to the
context, use the Entry method of the DbContext to access the state of the entity, and then mark it as
deleted.
The following code example shows how to delete an entity from a database without first retrieving it from
the database.
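A minimal sketch; the key value 42 is illustrative:
using (var ctx = new ProductsContext())
{
    // Create a stub entity that carries only the known primary key.
    var product = new Product { Id = 42 };
    ctx.Entry(product).State = EntityState.Deleted;
    ctx.SaveChanges();
}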
Updating Entities
To update an entity in the database, you can use the DbContext object and make changes in an
incremental fashion. When you update an entity, the context marks the change tracking status of the
entity as Modified. When you call the SaveChanges method, the DbContext object updates the entity in
the database. The exact procedure of how these incremental updates are performed depends on the
change tracking status.
Updating an entity
using (var context = new MyDbContext())
{
var student = (from s in context.Students
               where s.Name.ToLower().Contains("john")
               select s).Single();
student.Name = "Jonathan";
context.SaveChanges();
}
You can update an entity that is not tracked by the context, such as an entity you received as a method
parameter, by attaching the entity to the context, and then manually setting the entity's state to
Modified.
Note: Updating a detached entity is a common scenario when working with services, because the
updated entity is sent to the service and not loaded from the context.
The following code example shows how to update an entity that is not tracked by the context.
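A minimal sketch, assuming updatedStudent was received from outside the context:
using (var context = new StudentsContext())
{
    // Attaching through Entry and setting the state to Modified updates all columns.
    context.Entry(updatedStudent).State = EntityState.Modified;
    context.SaveChanges();
}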
The preceding code uses the Entry method to attach the updatedStudent object to the context, and
then sets the entity's state to Modified. When the context tries to save the attached entity, it cannot
detect which properties were changed, because it does not know the original values of the properties.
Therefore, in this scenario, the SQL statement will update all the columns, even those that have not
changed.
If you are not sure whether the entity you want to update is already tracked by the context you are
using, such as when you receive the context as a parameter, do not use the Entry method. If your context
already tracks an instance of an entity, and you call the Entry method with a different instance of the
same entity, an exception will be thrown because the context cannot track two instances of the same
entity. If you do not know whether an entity is tracked or not, you have two options:
1. Use the Find method to load the entity to the context, and then use the
DbEntityEntry<T>.CurrentValues.SetValues method to update the loaded entity with the values of
the updated entity instance. The Find method will first search the context for the entity and if not
found, will load the entity from the database.
2. Search only the entities already loaded by the context for the entity to update, by using the Local
property of the DbSet. If it is found, use the DbEntityEntry<T>.CurrentValues.SetValues method
to update the entity according to the values of the updated entity. If it is not found, use the Entry
method to attach the entity to the context, and then set its state to Modified. By using the Local
property, you can avoid accessing the database if the entity is not found in the context.
This example shows the two ways to update a detached entity if you do not know whether the context
already tracks the entity or not.
// Option 1
var loadedStudent = context.Students.Find(updatedStudent.StudentId);
context.Entry(loadedStudent).CurrentValues.SetValues(updatedStudent);

// Option 2
var existingStudent = context.Students.Local.FirstOrDefault(r => r.StudentId == updatedStudent.StudentId);
if (existingStudent == null)
{
context.Entry(updatedStudent).State = EntityState.Modified;
}
else
{
context.Entry(existingStudent).CurrentValues.SetValues(updatedStudent);
}
context.SaveChanges();
Demonstration Steps
You will find the steps in the “Demonstration: CRUD Operations in Entity Framework“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
Question: How do you create or modify a relationship (based on a foreign key) by using
Entity Framework Core?
For example, when you insert an order of a customer into the database, it may consist of multiple update
and insert operations. You might have to insert a record into the Orders table, a record into the Shipping
table, and modify the Inventory table to reflect the inventory changes because of fulfilling the order. If
any of these updates fail—for instance, if the Inventory table update fails because the item is no longer
available in stock—you need to carefully roll back the changes to the Orders and Shipping tables to
make sure you do not have an orphaned order that cannot be fulfilled. Similarly, if the Inventory table
update succeeds but an error occurs while inserting a record into the Shipping table, you must undo the
change in the Inventory table to make sure you do not lose inventory items. To aggravate the matter,
any updates you performed to the Inventory table may have been made visible to other applications, so
another process may have decided that an item is no longer in stock although your order has not been
successfully fulfilled.
Transactions
Transactions address the compensation and visibility issues by providing a scope of operations. A
transaction is a set of operations that runs in a sequence, and if one of the operations fails, the transaction
rolls back, and no operations are committed. You should use transactions if one operation depends on a
previous operation and cannot be committed without verifying that the previous operation was
successful. Also, you should use transactions when visibility is a concern, and you do not want to make a
change visible to other applications until the entire transaction completes.
By default, Entity Framework Core is transactional. When you call the SaveChanges method, it translates
the change set to SQL statements and wraps them in a single transaction that begins with the BEGIN
TRANSACTION statement. The SQL transaction is not committed unless all the items are added, updated,
or deleted successfully.
The following code example shows how to use the BeginTransaction method with Entity Framework
Core.
using (var ctx1 = new MyDbContext())
using (var transaction = ctx1.Database.BeginTransaction())
{
    // Update an entity
    ctx1.SaveChanges();
    // Update another entity
    ctx1.SaveChanges();
    // Update a third entity
    ctx1.SaveChanges();
    transaction.Commit();
}
In the preceding code example, the changes made by the three SaveChanges method calls are
committed to the database only when the transaction scope ends, and only because the entire scope was
marked as complete by calling the Commit method.
Entity Framework Core also supports an in-memory database provider, which is useful for testing because
it does not require a real database server.
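A minimal sketch, assuming the Microsoft.EntityFrameworkCore.InMemory provider package; the context
and database names are illustrative:
public class MyContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Store data in a named in-memory database instead of a real database server.
        optionsBuilder.UseInMemoryDatabase("TestDatabase");
        base.OnConfiguring(optionsBuilder);
    }
}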
Demonstration Steps
You will find the steps in the “Demonstration: Using Entity Framework with In-Memory Database“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
Entity Framework Core supports multiple database engines through provider packages. Available providers include:
• SQL Server
• SQLite
• In-memory
• PostgreSQL
• MySQL
• MyCat
• Firebird
Demonstration Steps
You will find the steps in the “Demonstration: Using Entity Framework with SQLite“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
Repository pattern
Entity Framework Core uses the DbContext class to implement the Unit of Work design pattern. This
pattern aggregates changes and commits them to the database by using the SaveChanges method. The
DbContext class can be used directly in the code. In some cases, such as in a microservices architecture,
you will want to use the Repository design pattern instead. To implement the Repository design pattern,
wrap the DbContext class with another class. The Repository design pattern has several benefits:
• It decouples your application logic from the data access technology.
• It makes unit testing easier, because the repository can be replaced with a test double.
• It centralizes data access logic in a single place.
Repository pattern
public class StudentRepository : IStudentRepository
{
    private StudentContext context;

    public StudentRepository(StudentContext context)
    {
        this.context = context;
    }

    // An example repository method; the IStudentRepository members are illustrative.
    public Student GetById(int id)
    {
        return context.Students.Find(id);
    }
}
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_LAK.md.
Best Practices
• Always use transactions when performing multiple operations that depend on each other, and may
require compensation when they fail in isolation.
• Prefer using LINQ to Entities and not Entity SQL or raw SQL to query the database. This makes your
code less fragile and easier to refactor.
• Beware of lazy loading behavior when you return an entity to a higher layer in your application. If the
DbContext object is disposed and the entity has not been fully loaded, accessing its nested
properties may cause an exception.
• Use the Entity Framework Core Fluent API (instead of data annotations) when you map an existing
object model to a database, and when the object model should not change as a result of the
mapping.
Review Question
Question: Why should you use Entity Framework Core and not direct database manipulation
with SQL statements in ADO.NET?
Tools
• Visual Studio 2017
• SQL Server 2017 Express
Module 3
Creating and Consuming ASP.NET Core Web APIs
Contents:
Module Overview
Module Overview
ASP.NET Core Web API provides a robust and modern framework for creating Hypertext Transfer Protocol
(HTTP)-based services. In this module, you will be introduced to HTTP-based services. You will learn
how HTTP works and become familiar with HTTP messages, HTTP methods, status codes, and headers.
You will also be introduced to the Representational State Transfer (REST) architectural style and
hypermedia.
You will learn how to create HTTP-based services by using ASP.NET Core Web API. You will also learn how
to consume them from various clients. After Lesson 3, in the lab "Creating an ASP.NET Core Web API",
you will create a web API and consume it from a client.
Objectives
After you complete this module, you will be able to:
• Describe HTTP-based services and the REST architectural style.
• Create web APIs by using ASP.NET Core Web API.
• Consume web APIs from client applications.
Lesson 1
HTTP Services
HTTP is a communication protocol that was created by Tim Berners-Lee and his team while working on
the WorldWideWeb (later renamed to World Wide Web) project. Originally designed to transfer
hypertext-based resources across computer networks, HTTP is an application layer protocol that acts as
the primary protocol for many applications including the World Wide Web.
Because of its vast adoption and the common use of web technologies, HTTP is now one of the most
popular protocols for building applications and services. In this lesson, you will be introduced to the basic
structure of HTTP messages and understand the basic principles of the REST architectural approach.
Lesson Objectives
After you complete this lesson, you will be able to:
• Explain the basic structure of HTTP.
Introduction to HTTP
HTTP is a first-class application protocol that was built to power the World Wide Web. To support such a
challenge, HTTP was built to allow applications to scale, taking into consideration concepts such as
caching and stateless architecture. Today, HTTP is supported by many different devices and platforms,
reaching most computer systems available today.
HTTP also offers simplicity, by using text messages and following the request-response messaging
pattern. HTTP differs from most application layer protocols because it was not designed as a Remote
Procedure Calls (RPC) mechanism or a Remote Method Invocation (RMI) mechanism. Instead, HTTP
provides semantics for retrieving and changing resources that can be accessed directly by using an
address.
HTTP Messages
HTTP is a simple request-response protocol. All
HTTP messages contain the following elements:
• Start-line
• Headers
• An empty line
• Body (optional)
Request Messages
Request messages are sent by the client to the server. Request messages have a specific structure based
on the general structure of the HTTP messages.
An HTTP request
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive
The most distinct difference between request and response messages is the structure of the start-line,
which in request messages is called the request-line.
Request-line
This HTTP request message's start-line is a typical request-line with the following space-delimited parts:
• HTTP method. This HTTP request message uses the GET method, which indicates that the client is
trying to retrieve a resource. Verbs will be covered in-depth in the topic Using Verbs later in this
lesson.
• Request URI. This part represents the URI to which the message is being sent.
• HTTP version. This part indicates that the message uses HTTP version 1.1.
Headers
This request message also has several headers that provide metadata for the request. Although headers
exist in both response and request messages, some headers are used exclusively by one of them. For
example, the Accept header is used in requests to communicate the kinds of responses the clients would
prefer to receive. This header is a part of a process known as content negotiation that will be discussed
later in this module.
Body
The request message has no body. This is typical of requests that use the GET method.
Response Messages
Response messages also have a specific structure based on the general structure of HTTP messages.
The HTTP response returned by the server for the above request
HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Tue, 13 Nov 2012 18:05:11 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close
{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"LastName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft road","Passport":"AB123456789"}
Status-Line
HTTP response start-lines are called status-lines. This HTTP response message has a typical status-line with
the following space-delimited parts:
• HTTP version. This part indicates that the message uses HTTP version 1.1.
• Status-Code. Status-codes help define the result of the request. This message returns a status-code
of 200, which indicates a successful operation. Status codes will be covered in-depth later in this
lesson.
• Reason-Phrase. A reason-phrase is a short text that describes the status code, providing a human-
readable version of the status code.
Headers
Like the request message, the response message also has headers. Some headers are unique for HTTP
responses. For example, the Server header provides technical information about the server software being
used. The Cache-Control and Pragma headers describe how caching mechanisms should treat the
message.
Other headers, such as the Content-Type and Content-Length, provide metadata for the message body
and are used in both requests and responses that have a body.
Body
A response message returns a representation of a resource in JavaScript Object Notation (JSON). The
JSON, in this case, contains information about a specific traveler in a travel management system. The
format of the representation is communicated by using the Content-Type header describing what is
known as media type. Media types are covered in-depth later in this lesson.
A URI is composed of several parts:
• Scheme. The scheme identifies the protocol to use, such as http or https.
• Host. The host identifies the server that holds the resource, either by name or by IP address.
• Port (optional). The port defines a specific port to be addressed. If not present, a default port will be
used. Different schemas can define different default ports. The default port for HTTP is 80.
• Absolute path (optional). The path provides additional data that together with the query describes
a resource. The path can have a hierarchical structure like a directory structure, separated by the slash
sign (/).
• Query (optional). The query provides additional nonhierarchical data that together with the path
describes a resource.
Different URIs can be used to describe different resources. For example, the following URIs describe
different destinations in an airline booking system:
• http://localhost/destinations/seattle
• http://localhost/destinations/london
When accessing each URI, a different set of data, also known as a representation, will be retrieved.
Using Verbs
HTTP defines a set of methods or verbs that add
an action like semantics to requests. HTTP 1.1
defines an extensible set of eight methods, each
with a different behavior. For example, the
following request uses the GET method to retrieve
information about a specific traveler in an airline
traveler system.
In the above example, the method is defined in the first segment of the request-line and communicates
what the request is intended to perform. For example, the GET method used in the request above
communicates that the request intends to retrieve data about an entity, not to modify it.
This behavior makes GET compatible with both properties an HTTP method might have: it is both safe and
idempotent.
• Safe verbs. These are verbs that are intended to have no side effects on the resource state on the
server, other than retrieving data.
• Idempotent verbs. These are verbs that are intended to have the same effect on the resource state
when the same request is sent to the server multiple times. For example, sending a single DELETE
request to delete a resource should have the same effect as sending the same DELETE request
multiple times.
Verbs are a central mechanism in HTTP and one of the mechanisms that make HTTP a powerful protocol.
Understanding what each verb does is very important for developing HTTP-based services. The following
verbs are defined in HTTP 1.1:
• HEAD. Requests intended to have the identical result of GET requests, but without returning a
message body. Properties: safe, idempotent. Typical use: checking request validity and retrieving
header information without the message body.
• PUT. Requests intended to store the entity sent in the request at the request URI, completely
overriding any existing entity at that URI. Properties: idempotent. Typical use: creating and updating
resources.
For more information about HTTP methods, refer to the HTTP 1.1 Request For Comments (RFC 2616).
Methods definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298758&clcid=0x409
For more information about HTTP status codes, refer to the HTTP 1.1 Request For Comments (RFC 2616).
HTTP Status-Codes definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298759&clcid=0x409
Introduction to REST
Until now in this module, you have learned how
HTTP acts as an application layer protocol. HTTP is
used to develop both websites and services.
Services developed by using HTTP are generally
known as HTTP-based services.
Today, REST is used to add important capabilities to a service. These capabilities include:
• Service discoverability
• State management
In this lesson, you will learn about these capabilities. For more information about REST, refer to Roy
Fielding's dissertation, Architectural Styles and the Design of Network-based Software Architectures.
Architectural Styles and the Design of Network-based Software Architectures by Roy Fielding
http://go.microsoft.com/fwlink/?LinkID=298760&clcid=0x409
Services that use the REST architectural style are also known as RESTful services. A simple way to
understand what makes a service RESTful is to use a taxonomy called the Richardson Maturity Model,
first suggested by Leonard Richardson in his talk at the QCon San Francisco conference in 2008.
• Level zero services. Use HTTP as a transport protocol, ignoring the capabilities of HTTP as an
application layer protocol. Level zero services use a single address, also known as an endpoint, and a
single HTTP method, which is usually POST. SOAP services and other RPC-based protocols are
examples of level zero services.
• Level one services. Identify resources by using URIs. Each resource in the system has its own URI by
which the resource can be accessed.
• Level two services. Use the different HTTP verbs to allow the user to manipulate the resources and
create a full API based on resources.
• Level three services. Although levels one and two only emphasize the suitable use of HTTP
semantics, level three services introduce hypermedia, an extension of the term hypertext, as a means
for resources to describe their own state and their relations to other resources.
For more information about the Richardson Maturity Model, refer to Leonard Richardson’s presentation
and notes.
Leonard Richardson’s QCon 2008 presentation and notes
https://aka.ms/moc-20487D-m3-pg1
Hypermedia
When the World Wide Web started, it strongly affected the way humans consume data. Alongside
abilities, such as remote access to data and the ability to search a global knowledge base, the World Wide
Web also introduced hypertext. Hypertext is a nonlinear format that enables readers to access data
related to a specific part of the text by using hyperlinks. The term hypermedia describes a logical
extension of the same concept. Hypermedia-based systems use hypermedia elements, known as
hypermedia controls, such as links and HTML forms, to enable resources to describe their current state
and other resources that are related to them.
This feed describes different instances of a flight in the BlueYonder Companion app. The Hypermedia
control entry is used here to refer clients to different instances of a specific flight.
This response represents a flight that enables booking in its current state.
{
"Source":{"Country":"Italy","City":"Rome"},
"Destination":{"Country":"France","City":"Paris"},
"Departure":"2014-02-01T08:30:00",
"Duration":"02:30:00",
"Price":387.0,
"FlightNumber":"BY001",
"links":[
{
"rel": "booking",
"Link": "http://localhost/flights/by001/booking"
}
]
}
Hypermedia is what differentiates REST from HTTP-based services. It is a simple but powerful concept that
enables a range of capabilities and patterns including service versioning, aspect management, and more
which are beyond the scope of this course. Today, more and more formats and APIs are created by using
hypermedia.
One of the media types supporting hypermedia is the Hypertext Application Language (HAL). The HAL
media type offers link-based hypermedia. For more information about HAL, refer to the HAL format
specification.
Media Types
HTTP was originally designed to transfer hypertext. Hypertext is a nonlinear format that contains
references to other resources, some of which are other hypertext resources. However, some resources
contain other formats, such as image files and videos, which required HTTP to support the transfer of
different types of message formats. To support different formats, HTTP uses Multipurpose Internet Mail
Extensions (MIME) types, also known as media types. MIME types were originally designed for use in
defining the content of email messages sent over SMTP.
Media types are made of two parts, a type and a subtype, optionally followed by type-specific parameters.
For example, the type text indicates human-readable text, and can be followed by subtypes such as html,
which indicates HTML content, or plain, which indicates a plain-text payload.
In addition, the text type supports a charset parameter, so that a declaration such as
text/plain; charset=utf-8 is also valid.
In HTTP, media types are declared by using headers as part of a process that is known as content
negotiation. Content negotiation is not restricted to media type and includes support for language
negotiation, encoding, and more. The following section shows how content negotiation is used for
handling media types.
This request message uses the Accept header to communicate to the server what media types it can
accept.
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Host: localhost:4392
DNT: 1
Connection: Keep-Alive
Although the server should try to fulfill the request for content, this is not always possible. Be aware that
in the previous request, the type */* indicates that if text/html and application/xhtml+xml are not
available, the server should return whatever type it can.
This response message uses the Content-Type header to declare what media type it uses for the
entity-body.
HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Sat, 17 Nov 2012 13:27:20 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close
{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"LastName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft road","Passport":"AB123456789"}
Media types define the structure of HTTP message bodies. Content negotiation enables servers and clients to
set the expectation for what content they should expect during their HTTP transaction. Content
negotiation is not limited to media types. For example, content negotiation is used to negotiate content
compression by using the Accept-Encoding header, localization by using the Accept-Language header,
and more.
Content negotiation in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298763&clcid=0x409
Lesson 2
Creating an ASP.NET Core Web API
ASP.NET Core Web API is the first full-featured framework for developing HTTP-based services in .NET
Core. Using ASP.NET Core Web API gives developers reliable methods for creating, testing, and deploying
HTTP-based services. In this lesson, you will learn how to create ASP.NET Core Web API services and how
they are mapped to the different parts of HTTP. You will also learn how to interact directly with HTTP
messages.
Lesson Objectives
After you complete this lesson, you will be able to:
• Describe ASP.NET Core Web API and how it is used for creating HTTP-based services.
• Create routing rules.
ASP.NET Core Web API provides benefits such as:
• Testability.
• Integration with other relevant frameworks like Entity Framework and Unity.
The WCF Web API team released six preview versions until, in February 2012, the team was merged with
the ASP.NET team, forming ASP.NET Web API.
ASP.NET Core, which runs on .NET Core, is therefore also a cross-platform, high-performance framework
for building modern back-end applications, such as web apps that run in the cloud and on-premises.
Defining Controllers
The Controller class is responsible for the following tasks:
• Action Selection. The Controller class is responsible for calling the ActionSelector class, which
selects the action method to run.
• Applying Filters. ASP.NET Core Web API filters let developers extend the request/response pipeline.
Before running an action method, the Controller class is in charge of applying and running the filters
in the correct order before and after running the action methods.
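For example, a minimal controller definition (the controller name and data are placeholders):
public class DestinationsController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new[] { "Paris", "London" };
    }
}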
The Controller class exposes ControllerContext by using the ControllerContext property. In addition,
the Controller class also provides some properties that expose specific data that is a part of the
ControllerContext property such as the Request property that provides access to the HttpRequest
representing the HTTP request for the operation. The HttpRequest class is discussed in-depth in Lesson 4,
Handling HTTP Requests and Responses, of this module.
Additional Reading: Filters are discussed in-depth throughout Module 4. Action and
Exception filters are discussed in Module 4, Lesson 2, Customizing Controllers and Actions.
Method selection is based on the HTTP method used by the request and on the request-URI. There are several techniques for mapping actions:
• Applying the AcceptVerbs attribute to the action, for example:
[AcceptVerbs("GET")]
• Applying an HTTP method attribute to the action, for example:
[HttpDelete]
• Relying on the action method's name and signature, for example:
public IActionResult Flights(int id)
Note: These conventions support only the GET, HEAD, PUT, POST, OPTIONS, PATCH, and DELETE methods.
Routing Tables
ASP.NET Core uses the
Microsoft.AspNetCore.Routing.IRouter
interface to describe the different routes that were
configured before the initialization of the host. A
route contains a URI template and default values
for the template. ASP.NET Core uses routes to map HTTP requests based on their request-URI and HTTP
method to the correlating code in the server.
Defining Routes
ASP.NET Core Web API routes are defined in Startup.cs by using the MapRoute extension method, as shown in the following code.
This example shows the configuration of a simple route based on the name of the controller.
routeBuilder.MapRoute(
    name: "Default",
    template: "{controller}/{action}/{id?}",
    defaults: new { controller = "Home", action = "Index" }
);
The following headings discuss controllers and actions in-depth because understanding controllers and
actions is important to understanding routes.
When ASP.NET Core Web API receives a request that matches the template in the route, it looks for a
controller that matches the value that was passed in the controller placeholder of the URI template by
name. For example, a URI with the following URI relative path, "api/flights/by001", will be evaluated
against the template defined in the earlier example ("api/{controller}/{id?}"). ASP.NET Core Web API will
look for a controller that is named FlightsController.
This controller maps when the flights value is passed as the value for the {controller} placeholder.
An action definition
public class FlightsController : Controller
{
    public IActionResult Get(string id)
    {
        // Place code here to return an IActionResult
    }
}
Note: This convention only supports the GET, HEAD, PUT, POST, OPTIONS, PATCH, and DELETE methods. However, actions also support attribute-based routing, as described later in this lesson.
For parameter bindings, simple types include all .NET primitives with the addition of DateTime, Decimal,
TimeSpan, String, and Guid.
Demonstration Steps
You will find the steps in the “Demonstration: Creating Your First ASP.NET Core Web API “ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
Lesson 3
Consuming ASP.NET Core Web APIs
As with any other application, the ASP.NET Core Web API services need a process to give them a runtime
environment. This runtime must accommodate code that potentially serves many clients. When
developing services, hosting environments provide most of the capabilities needed to service client
requests and maintain a quality of service. You will learn how to consume the service from various client
environments including HTML, JavaScript, and .NET Core.
Lesson Objectives
After you complete this lesson, you will be able to:
Another way to start HTTP requests from a browser is by using HTML forms. HTML forms are HTML
elements that create a form-like UI in the HTML document that lets the user insert and submit data to the
server. HTML forms contain sub-elements, called input elements, and each represents a piece of data both
in the UI and in the resulting HTTP message.
This HTML form lets users submit a new location to the server from a web browser, generating a POST
request.
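A minimal sketch of such a form, assuming a hypothetical /api/locations endpoint as the form target:
<form id="newLocation" method="post" action="/api/locations">
  <input type="text" name="LocationId" />
  <input type="text" name="Country" />
  <input type="text" name="State" />
  <input type="text" name="City" />
  <input type="submit" value="Submit" />
</form>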
This HTTP message was generated by submitting the newLocation HTML form.
LocationId=7&Country=Belgium&State=&City=Brussels
The most flexible mechanism to start HTTP from a browser environment is by using JavaScript. Using
JavaScript provides two main capabilities that are lacking in other browser-based techniques:
• Complete control over the HTTP requests (including HTTP method, headers, and body).
• Asynchronous JavaScript and XML (AJAX). Using AJAX, you can send requests from the client after the
browser completes loading the HTML. Based on the result of the calls, you can use JavaScript to
update parts of the HTML page.
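For example, the following sketch sends an AJAX request with XMLHttpRequest and updates part of the page (the endpoint and element ID are assumptions):
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/destinations");
xhr.onload = function () {
    // Update part of the HTML page after the response arrives.
    document.getElementById("result").innerText = xhr.responseText;
};
xhr.send();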
Demonstration Steps
You will find the steps in the “Demonstration 1: Consuming Services by Using JavaScript“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
This code example uses the HttpClient to send a GET request, receive an HttpResponseMessage from
the server, and then read its content as a string.
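A minimal sketch of such a call (the URL is a placeholder):
using (HttpClient client = new HttpClient())
{
    // Send the GET request and wait asynchronously for the response.
    HttpResponseMessage response = await client.GetAsync("http://localhost:4392/api/destinations");
    string content = await response.Content.ReadAsStringAsync();
}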
Although this code provides a simple asynchronous API, it is not common for the client to require string
representation of the data. A more useful approach is to obtain a deserialized object based on the entity
body.
To support serializing and de-serializing objects, HttpClient uses a set of extensions defined in
System.Net.Http.Formatting.dll that is a part of the Microsoft ASP.NET Web API Client Libraries
NuGet package. System.Net.Http.Formatting.dll adds the extension methods to the System.Net.Http
namespace so that no additional using directive is needed.
This code example uses the ReadAsAsync<T> extension method to deserialize the content of the HTTP
message into a list of destinations.
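A minimal sketch, assuming a Destination model class and a placeholder URL:
HttpResponseMessage response = await client.GetAsync("http://localhost:4392/api/destinations");
// ReadAsAsync<T> deserializes the JSON entity-body into the requested type.
List<Destination> destinations = await response.Content.ReadAsAsync<List<Destination>>();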
The following code sample shows part of a custom message handler that retries failed requests by overriding the SendAsync method of DelegatingHandler (the retry logic after the try block is a hypothetical completion of the original fragment).
int retries = 0;
while (true)
{
    try
    {
        // base.SendAsync calls the inner handler
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        return response;
    }
    catch (HttpRequestException)
    {
        // Retry up to three times before giving up (hypothetical policy).
        if (++retries >= 3)
        {
            throw;
        }
    }
}
HttpClient accepts an HttpMessageHandler in its constructor, and by deriving from DelegatingHandler, you can chain handlers together to form a pipeline.
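For example, assuming a hypothetical RetryHandler derived from DelegatingHandler:
HttpClient client = new HttpClient(new RetryHandler { InnerHandler = new HttpClientHandler() });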
Demonstration Steps
You will find the steps in the “Consuming Services by Using HttpClient“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
Question: What are the benefits of HttpClient that make it more useful than HttpWebRequest and WebClient?
Objectives
After you complete this lab, you will be able to:
• Create a console application and connect to the server by using HttpClient.
Lab Setup
Estimated Time: 30 Minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD03_LAK.md.
Lesson 4
Handling HTTP Requests and Responses
Creating an instance of a class and finding the method to run is not always enough. To provide a real
solution for HTTP-based services, ASP.NET Core Web API must provide additional functionality for
interacting with HTTP messages. This functionality includes mapping parts of the HTTP request to method
parameters in addition to a comprehensive API for processing and controlling HTTP messages. Using that
API, you can now easily interact with headers in the requests and response messages, control status codes,
and more.
Lesson Objectives
After completing this lesson, you will be able to:
• The entity-body. In some HTTP messages, the message body carries the data.
Note: Headers are used to pass metadata and are not part of the business logic.
Header data is not bound to method parameters by default and is accessed by using the
HttpRequest class, described later in this lesson.
By default, ASP.NET Core Web API differentiates simple and complex types. Simple types are mapped
from the URI and complex types are mapped from the entity-body of the request. For parameter
bindings, simple types include all .NET primitive types (int, char, bool, and so on) with the addition of
DateTime, Decimal, TimeSpan, String, and Guid.
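For example, in the following hypothetical action, id (a simple type) is bound from the URI, and traveler (a complex type) is bound from the entity-body:
[HttpPut("{id}")]
public IActionResult Update(int id, [FromBody] Traveler traveler)
{
    // Update the traveler here.
    return Ok();
}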
Retrieve the value of the Accept-Language header by using the Request property
public string Get(int id)
{
    var lang = new RequestHeaders(Request.Headers).AcceptLanguage;
    var bestLang = (from l in lang
                    orderby l.Quality descending
                    select l.Value.Value).FirstOrDefault();
    switch (bestLang)
    {
        case "en":
            return "Hello";
        case "da":
            return "Hej";
    }
    return string.Empty;
}
This code example creates a new flight reservation and returns an HTTP message that has two important
characteristics: a 201 created status and a Location header with the URI of the newly created resource.
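A minimal sketch of such an action (the action, model, and repository names are assumptions):
[HttpPost]
public IActionResult Post([FromBody] Reservation reservation)
{
    int id = _repository.Add(reservation); // hypothetical repository
    // CreatedAtAction returns 201 (Created) and sets the Location header
    // to the URI of the newly created resource.
    return CreatedAtAction(nameof(Get), new { id = id }, reservation);
}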
To handle exceptions, you must create a middleware and set the status code, headers, and content that
you want the response to have.
This code example shows how to create a middleware to handle the exception and return a 500 internal
server error response.
Handling exceptions by using middleware
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseExceptionHandler(
        options =>
        {
            options.Run(
                async context =>
                {
                    context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
                    context.Response.ContentType = "text/html";
                    var ex = context.Features.Get<IExceptionHandlerFeature>();
                    if (ex != null)
                    {
                        var err = $"<h1>Error: {ex.Error.Message}</h1>{ex.Error.StackTrace}";
                        await context.Response.WriteAsync(err).ConfigureAwait(false);
                    }
                });
        });
}
Demonstration Steps
You will find the steps in the “Throwing Exceptions“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
Lesson 5
Automatically Generating HTTP Requests and Responses
In the modern age, very few applications are islands. A typical application interacts with tens or hundreds
of internal and external web services, a trend that has further increased with the advent of microservice
architectures. Mashing up undocumented APIs together, or having to understand each vendor’s
documentation in isolation, is an extremely daunting and complicated task. The OpenAPI specification
(formerly the Swagger specification) is a vendor-agnostic format for defining HTTP APIs, which can be
used for generating documentation, HTTP clients, HTTP servers, and mock service implementations.
In this lesson, we will explore the OpenAPI specification and how it can be used to design a modern HTTP
API in a language-independent manner. We will use the Swagger web-based tools for designing and
testing the API and will generate a C# client that can be used by our web services to interact with the API
we just designed, or with a third-party API that uses the OpenAPI specification.
Lesson Objectives
After completing this lesson, students will be able to:
• Design an HTTP API with the OpenAPI specification.
For the full formal details of the OpenAPI specification, see the OpenAPI-Specification GitHub
repository
https://aka.ms/moc-20487D-m3-pg4
The OpenAPI specification provides a standard format for describing an HTTP API. A specification is a
JSON or YAML text file containing numerous sections for describing the API endpoints, parameters,
request bodies, response bodies, status codes, examples, and more. The following are the most common
components you will encounter in OpenAPI specifications:
• General information. Contains the OpenAPI version; the service name, description, and version; and the base URLs for the service.
• Paths. Contain the API endpoints of your service, including relative URL parts, HTTP verbs, and
descriptions.
• Responses. Contain the possible HTTP status codes and response bodies returned by your service,
including their media type.
• Parameters. Contain the variable part of the accessed endpoint, and can be provided in the URL itself
(e.g. /flights/BY001), the query string (e.g. /flights/byId?id=BY001), or the request body.
• Reusable schemas. Contain descriptions of data models received or returned by the API, or
individual parameter, request or response descriptions.
Note: YAML (YAML Ain’t Markup Language) is a text serialization language similar to JSON,
which tries to do away with text elements that make the document harder for humans to parse
and understand. Compared to JSON, YAML is simpler to read because it uses indentation for
nesting, and a simplified format for nested objects, arrays, and strings.
This lesson uses the OpenAPI 3.0 specification, which introduced numerous useful features and
simplifications to the OpenAPI standard. To learn more about the new features and
differences between OpenAPI 2.0 and OpenAPI 3.0, see:
https://aka.ms/moc-20487D-m3-pg5
The following OpenAPI specification describes an API with a single path that you can access by making a
GET request, which returns a JSON document with a single string value:
schema:
  type: object
  properties:
    message:
      type: string
example:
  message: Hello, World
The servers object in the preceding document is an array of URLs where the service is accessible. It is
followed by an info object that contains the service version, title, and description. Next is the paths
object, which contains a single endpoint: /hello, which expects a GET request with no parameters. The
only expected response is under the responses object, and it should have an HTTP status code 200 (OK),
have the application/json media type, and a simple schema with a single string property titled message.
There is also an example provided so that API users know what to expect (providing an example also
makes automatic server mocking possible).
If this service was running on the specified URL and conforming to the API above, we could make an HTTP
request to it using PowerShell or cURL, and receive a reply, as follows:
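For example (the URL is a placeholder for the server URL declared in the specification):
curl https://example.com/api/hello
{"message":"Hello, World"}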
The following example adds query parameters and a more complex response schema to the OpenAPI specification discussed above:
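The added listing is a sketch; the following shows how such an /echo path might look, where the three response property names (username, count, and message) are assumptions:
  /echo:
    get:
      description: Returns a personalized greeting
      parameters:
        - name: username
          in: query
          required: true
          schema:
            type: string
        - name: count
          in: query
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object
                properties:
                  username:
                    type: string
                  count:
                    type: integer
                  message:
                    type: string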
In the above API, the /echo endpoint expects two query string parameters titled username and count. The
response schema is slightly more complex and consists of an object with three properties.
If this service was running on the specified URL and conforming to the API above, we could make an HTTP
request to it using PowerShell or cURL, and receive a reply, as follows:
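For example (again with a placeholder URL, and a reply matching the hypothetical schema sketched above):
curl "https://example.com/api/echo?username=David&count=3"
{"username":"David","count":3,"message":"Hello, David"}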
An important concern when building real-world OpenAPI specifications is reusing common components. For example, you can imagine that 404 (Not Found) responses will be quite similar in many cases, and specifying their details in every path would be redundant. Likewise, HTTP request bodies and response bodies will often have reusable, shared objects -- the data model objects for your service. To address the need for reusable objects, the OpenAPI specification has a components section, which can contain reusable definitions for parameters, responses, and object schemas.
The following OpenAPI document specifies a small part of the Blue Yonder Flight Reservations API, with multiple operations and shared schemas:
    name: MIT
    url: 'http://opensource.org/licenses/MIT'
servers:
  - url: https://blueyonder.com/flights-api
paths:
  /flights:
    get:
      description: Returns a list of all flights
      responses:
        '200':
          description: Successfully returned flights
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    airline:
                      type: string
                    flightnum:
                      type: integer
              example:
                - airline: Blue Yonder
                  flightnum: 97
                - airline: Blue Yonder
                  flightnum: 103
  /flights/{flightId}:
    get:
      description: Returns flight information for a flight
      parameters:
        - name: flightId
          in: path
          required: true
          schema:
            type: string
          example: BY97
      responses:
        '200':
          description: Successfully returned flight
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Flight'
              example:
                airline: Blue Yonder
                source: Paris
                destination: London
                departureTime: '21 Mar 2018 08:30:00'
                number: 97
        '404':
          description: No such flight found
  /reservations:
    post:
      description: Creates a new flight reservation
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Reservation'
            example:
              airline: Blue Yonder
              flightNumber: 97
              departureTime: '21 Mar 2018 08:30:00'
              passengerName: David Smith
      responses:
        '201':
          description: Successfully created the reservation
components:
  schemas:
    Flight:
      type: object
      required:
        - airline
        - source
        - destination
        - departureTime
        - number
      properties:
        airline:
          type: string
        source:
          type: string
        destination:
          type: string
        departureTime:
          type: string
          format: datetime
        number:
          type: integer
    Reservation:
      type: object
      required:
        - airline
        - flightNumber
        - departureTime
        - passengerName
      properties:
        airline:
          type: string
        flightNumber:
          type: integer
        departureTime:
          type: string
          format: datetime
        passengerName:
          type: string
In the preceding example, pay attention to the components section, where we define the schemas for two object types -- Flight and Reservation. In the paths section, we include references to these schemas by using the special $ref keyword. Also, note the use of the required keyword to specify which object properties are required (for both request parameters and response content).
To learn more about the OpenAPI specification and how to construct specifications for more
complex services, refer to the tutorial at:
https://aka.ms/moc-20487D-m3-pg6
The following screenshot shows the expanded /flights/{flightId} API, which includes the expected
parameter type and the possible responses:
After you define your API on the left, you can use the UI on the right to test it, right from the editor. If the
API requires parameters or a request body, you can enter them as well. Finally, the editor makes an HTTP
request on your behalf and displays the results immediately.
The following screenshot shows the test UI, where you can specify the flight ID and then execute the
request:
The following screenshot shows the request that was run and the response that was returned:
In the preceding screenshot, the request is sent to the automatic mock server provided by Swagger Hub
(https://virtserver.swaggerhub.com/…). The mock server makes it very easy to test your API definition
before you have any server implementation available for testing.
To learn more about the Swagger Hub automatic mock server integration, see:
https://aka.ms/moc-20487D-m3-pg7
Demonstration Steps
You will find the steps in the “Testing HTTP requests with Swagger“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
You can find the AutoRest open source project and its documentation on GitHub:
https://aka.ms/moc-20487D-m3-pg8
Note: At the time of writing, AutoRest has full support for OpenAPI 2.0, but it doesn’t have
support for OpenAPI 3.0. If you plan to use AutoRest, make sure to provide an OpenAPI 2.0
document to the tool.
The following command installs AutoRest on your system, provided you have a working Node.js
installation (v7.10.0 or later is required at the time of writing):
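npm install -g autorest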
To generate a C# client using AutoRest, you provide it a configuration file in the Markdown format. The
configuration file contains the location of your OpenAPI document (JSON or YAML), and any additional
documentation, such that the configuration file can be used as a standalone entry point to building your
service API. AutoRest then generates the client code required to access the service. Note that AutoRest
automatically pulls the code generators you specify (for example, the C# code generator when specifying --csharp), so you don’t need to install all of them ahead of time.
The following is a minimal AutoRest configuration file, which specifies the location of an OpenAPI
document and an output directory for the generated client:
```yaml
input-file: hotels_1.0.0_swagger.yaml
csharp:
  namespace: BlueYonder.Hotels
  output-folder: blueyonder-hotels
```
Note: The > see https://aka.ms/autorest comment in the Markdown configuration file is required by the AutoRest tool. It will throw an exception if the comment is not present.
The following is the OpenAPI definition in YAML format provided to the AutoRest tool:
host: virtserver.swaggerhub.com
basePath: /xoreax/hotels/1.0.0
schemes:
  - https
Note: The operationId attribute attached to each operation is required by the AutoRest
tool and is used for the method names in the generated code. If it is not present, the tool will
throw an exception.
The output directory contains an interface for the service and a service proxy that implements the
interface. Additionally, each request, response, and schema object that is non-trivial gets its own class. For
the above example, the following files were generated:
namespace BlueYonder.Hotels
{
    using Microsoft.Rest;
    using Models;
    using Newtonsoft.Json;
    using System.Collections;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    /// <summary>
    /// Blue Yonder hotel reservations service
    /// </summary>
    public partial interface IBlueYonderHotelReservations : System.IDisposable
    {
        /// <summary>
        /// The base URI of the service.
        /// </summary>
        System.Uri BaseUri { get; set; }

        /// <summary>
        /// Gets or sets json serialization settings.
        /// </summary>
        JsonSerializerSettings SerializationSettings { get; }

        /// <summary>
        /// Gets or sets json deserialization settings.
        /// </summary>
        JsonSerializerSettings DeserializationSettings { get; }

        /// <summary>
        /// Returns a list of hotels
        /// </summary>
        // A generated signature would be similar to the following
        // (the operation name is hypothetical):
        // Task<HttpOperationResponse<IList<Hotel>>> GetHotelsWithHttpMessagesAsync(
        //     Dictionary<string, List<string>> customHeaders = null,
        //     CancellationToken cancellationToken = default(CancellationToken));

        /// <summary>
        /// Returns a specific hotel's details
        /// </summary>
        /// <param name='hotel'>
        /// The hotel ID
        /// </param>
        /// <param name='customHeaders'>
        /// The headers that will be added to request.
        /// </param>
        /// <param name='cancellationToken'>
        /// The cancellation token.
        /// </param>
        Task<HttpOperationResponse<Hotel>> GetHotelByIdWithHttpMessagesAsync(string hotel, Dictionary<string, List<string>> customHeaders = null, CancellationToken cancellationToken = default(CancellationToken));
    }
}
To use the generated client in your application, add the files to your project, and then add the
Microsoft.Rest.ClientRuntime NuGet package. Then, create an instance of the service proxy class and
use it directly.
The following example shows how to use the generated client in your C# application:
https://aka.ms/moc-20487D-m3-pg9
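A minimal sketch, assuming the generated BlueYonderHotelReservations proxy class and the convenience extension methods that AutoRest generates around the WithHttpMessagesAsync operations:
using (var client = new BlueYonderHotelReservations())
{
    // GetHotelByIdAsync is the generated wrapper around GetHotelByIdWithHttpMessagesAsync.
    Hotel hotel = await client.GetHotelByIdAsync("H1"); // "H1" is a hypothetical hotel ID
}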
Demonstration Steps
You will find the steps in the “Generating C# HTTP Clients by Using AutoRest“ section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
Best Practices
• Model your services to describe resources and not functions.
Review Question
Question: What are ASP.NET Core Web API controllers used for?
Module 4
Extending ASP.NET Core HTTP Services
Contents:
Module Overview 4-1
Module Overview
ASP.NET Core Web API provides a complete solution for building HTTP services, but services often have various needs and dependencies. In many cases, you will need to extend or customize the way ASP.NET Core Web API executes your service, for example, to apply error handling and logging, integrate with other components of your application, or support other standards that are available in the HTTP world.
Understanding the way ASP.NET Core Web API works is important when you extend it. The division of responsibilities between components and the order of execution are important when intervening in the way ASP.NET Core Web API executes your service.
Finally, with ASP.NET Core Web API, you can also extend the way you interact with other parts of your system. With the dependency resolver mechanism, you can control how instances of your service are created, giving you complete control over managing the dependencies of your services.
Objectives
After completing this module, students will be able to:
• Extend the ASP.NET Web API request and response pipeline.
Lesson 1
The ASP.NET Core Request Pipeline
In this lesson we will learn about the Web API processing architecture and the flow of requests and
responses in it. We will focus on the role of middleware in the pipeline and learn the benefits and the
ways to customize middleware.
Lesson Objectives
After completing this lesson, students will be able to:
Architecture Overview
The ASP.NET Core Web API processing architecture is made up of three layers:
• Hosting
• Middleware
• Controllers
Hosting
The hosting layer is in charge of interacting with the underlying communication infrastructure, creating an HttpRequest object from the incoming request, and sending the object down through the middleware pipeline. The hosting layer is also in charge of converting HttpResponse objects received from the middleware pipeline into HTTP messages that are sent through the underlying communication infrastructure.
ASP.NET Core Web API has three implementations for the hosting layer:
• Kestrel. A cross-platform web server for ASP.NET Core, and the default web server in ASP.NET Core project templates.
• ASP.NET Core Module. This module works with Kestrel and is a native IIS module on Windows.
• HTTP.sys. This web server for ASP.NET Core is available only on Windows. It has some features that are missing in the Kestrel web server, including Windows authentication and WebSockets.
Middleware
Middleware are methods that are chained to each other to form a pipeline. Every middleware receives an
HttpContext object and performs some processing on the message before passing it to the next
middleware in the pipeline. This allows ASP.NET Core Web API to separate the concerns for different
processing that must be applied to every message and provides an extensibility point for developers.
Middleware are covered later in this lesson.
Controllers
The final layer in ASP.NET Core Web API is executed by the controllers themselves. When the
OnActionExecutionAsync method of a controller is called, it starts a process that should result in the
execution of an action method processing the request and returning a response. The process is made out
of the following steps:
• Action Selection. The first step for executing an action method is identifying which action should be
executed. Action selection is covered in Module 3, “Creating and Consuming ASP.NET Core Web
APIs”, Lesson 2, “Creating an ASP.NET Core Web API” in Course 20487.
• Creating the Filters Pipeline. Each action can have a set of components called filters associated with it. Similar to middleware, filters also provide a way to create a pipeline of processing units, but only for a specific action rather than for the entire host. ASP.NET Core Web API has three types of filters executed in the following order:
o Authorization filters
o Resource filters
o Result filters
The filters pipeline also contains two other components:
o ModelBinders. The ModelBinders class performs the process of parameter binding and is executed after the resource filters. Parameter binding is covered in Module 3, “Creating and Consuming ASP.NET Core Web APIs”, Lesson 2, “Creating an ASP.NET Core Web API” in Course 20487.
o ControllerActionInvoker. The Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker class is in charge of invoking the action method and converting the result to a response message (if needed).
ASP.NET middleware
A pipeline of message processing components is a
common pattern in many frameworks that deal
with messages. ASP.NET modules, Connect
middleware (in Node.js) and many other
frameworks all provide components that receive a
request, return a response, and provide
extensibility to a message processing pipeline.
• Use. You allow middleware to run code before and after the next middleware, and you can even decide to short-circuit the pipeline and not run the next middleware.
• Run. You create a middleware that runs at the end of the pipeline.
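A minimal sketch of both styles registered in the Configure method:
public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        // Code here runs before the next middleware.
        await next();
        // Code here runs after the next middleware completes.
    });

    app.Run(async context =>
    {
        // Terminal middleware; the pipeline ends here.
        await context.Response.WriteAsync("Hello from the pipeline");
    });
}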
The standard way to expose middleware is with an extension method on the IApplicationBuilder interface.
Use the extension method in the Configure method just like simple middleware.
public void Configure(IApplicationBuilder app)
{
    app.UseCustomMiddleware();
    …
}
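For example, a hypothetical CustomMiddleware class can be exposed as follows:
public static class CustomMiddlewareExtensions
{
    public static IApplicationBuilder UseCustomMiddleware(this IApplicationBuilder app)
    {
        return app.UseMiddleware<CustomMiddleware>();
    }
}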
Demonstration Steps
You will find the steps in the “Demonstration: Creating a Middleware for Custom Error Handling“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD04_DEMO.md.
Lesson 2
Customizing Controllers and Actions
In this lesson, we will learn about asynchronous actions and how they impact the overall performance of code execution. We will learn about filters, which provide a mechanism to extend the pipeline for specific actions or controllers, similar to middleware, and see examples of their usage. We will learn how to validate incoming data by using ASP.NET Core model validators, and finally, we will learn how to negotiate various media types, such as XML, JSON, and binary, by using ASP.NET Core media type formatters.
Lesson Objectives
After completing this lesson, students will be able to:
Asynchronous actions
One of the most powerful capabilities of ASP.NET
Core Web API is the support for building
asynchronous actions. Asynchronous actions
provide a simple-to-use mechanism that you can
use to improve the scalability of services when
performing I/O bound operations.
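Consider the following sketch of a synchronous (blocking) service call; the Client variable name mirrors the discussion that follows, and the URL is a placeholder:
public string Get()
{
    WebRequest Client = WebRequest.Create("http://localhost:4392/api/destinations");
    using (WebResponse response = Client.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        // The executing thread is blocked until the response arrives.
        return reader.ReadToEnd();
    }
}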
The preceding code is relatively easy to follow. However, there is one line to which you should pay close
attention. When calling the Client.GetResponse method, the executing thread is blocked while waiting
for the response. This blocking behavior is unnecessary, considering the fact that most of the Client.GetResponse method's execution is carried out by the network card and the remote server.
The following code shows an asynchronous service call by using the HttpClient API.
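A minimal sketch of such a call (the URL is a placeholder):
HttpResponseMessage response = await client.GetAsync("http://localhost:4392/api/destinations");
string content = await response.Content.ReadAsStringAsync();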
The preceding code uses the await keyword to simplify the call to the asynchronous
HttpClient.GetAsync method. While this code seems sequential during the execution, it is actually
divided into the following steps:
• All the code up to the await keyword is being executed sequentially.
• When calling the HttpClient.GetAsync method, the method immediately returns a task representing
its asynchronous execution and the current thread returns.
• When using the await keyword, the C# compiler generates a continuation method that includes all
the code following the await statement. This code will be used as the continuation of the task
returned by the HttpClient.GetAsync method, which is invoked by the Input/Output Completion
Port (IOCP).
The following code sample shows an asynchronous service call run from inside an asynchronous action.
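A minimal sketch, assuming the same placeholder URL:
public async Task<string> Get()
{
    using (HttpClient client = new HttpClient())
    {
        HttpResponseMessage response = await client.GetAsync("http://localhost:4392/api/destinations");
        return await response.Content.ReadAsStringAsync();
    }
}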
Demonstration Steps
You will find the steps in the “Demonstration: Creating Asynchronous Actions” section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD04_DEMO.md.
Filters
Middleware are applied early in the ASP.NET Core Web API pipeline, before the request reaches the controller. This means that any configured middleware will be executed for every request and response.
The following code sample shows an action filter that uses System.Diagnostics.Trace calls to emit traces.
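A minimal sketch of such a filter:
public class TraceActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        Trace.WriteLine("Executing action");
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        Trace.WriteLine("Executed action");
    }
}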
• Exception filters. These are classes that derive from the ExceptionFilterAttribute class and are used to handle exceptions. Exception filters are executed after the completion of other filters, and only if the Task returned by the filters pipeline is in a faulted state.
• Result filters. These are classes that derive from the ResultFilterAttribute class and are similar to action filters, except that they are not executed if the action is faulted.
The following code sample demonstrates how to create a result filter.
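A minimal sketch of a result filter that adds a response header (the header name is hypothetical):
public class AddHeaderResultFilter : ResultFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext context)
    {
        context.HttpContext.Response.Headers.Add("X-Custom-Header", "value");
    }
}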
Model validators
Usually, ASP.NET Core applications store data in
the database. Therefore, you need to validate the
data that comes from the users before doing any
operations. ASP.NET Core has abstractions called
model validators for this purpose. These
abstractions are implemented with attributes that
are derived from ValidationAttribute. There are
built-in attributes for common cases such as
Required, StringLength, and Range.
[Required]
[StringLength(100)]
public string Name { get; set; }

[DataType(DataType.Date)]
public DateTime Birthdate { get; set; }
For custom validation, you can create a custom attribute and use it in the model.
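A minimal sketch of a custom validation attribute (the validation rule is hypothetical):
public class FutureDateAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        // Valid only when the value is a date that has not already passed.
        return value is DateTime date && date >= DateTime.Today;
    }
}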
The following code demonstrates populating the SupportedMediaTypes property inside a media type
formatter’s constructor.
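A minimal sketch, assuming a hypothetical CsvOutputFormatter class that derives from TextOutputFormatter:
public CsvOutputFormatter()
{
    // Declare the media types and encodings this formatter supports.
    SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/csv"));
    SupportedEncodings.Add(Encoding.UTF8);
}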
Sometimes, the same media type can be supported only by specific types. For example, images might be
a valid media type when requesting a resource for an employee in a company, but not for a department.
The InputFormatter class has the CanReadType method and the OutputFormatter class has the CanWriteType method, which can be overridden to define which types can be read or written by the specific media type formatter.
Finally, you can implement the actual process of reading or writing the data using the
ReadRequestBodyAsync and WriteResponseBodyAsync methods.
The following code demonstrates the use of the WriteResponseBodyAsync method to provide a list of
employees using the CSV file format.
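Continuing the hypothetical CsvOutputFormatter, the following sketch writes a list of employees in CSV format (the Employee type and its properties are assumptions):
public override async Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
{
    var buffer = new StringBuilder();
    foreach (Employee employee in (IEnumerable<Employee>)context.Object)
    {
        buffer.AppendLine($"{employee.Id},{employee.Name}");
    }
    await context.HttpContext.Response.WriteAsync(buffer.ToString(), selectedEncoding);
}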
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD04_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD04_LAK.md.
Lesson 3
Injecting Dependencies into Controllers
Most applications usually consist of several components that are dependent on each other. It is important
to be able to replace the implementation of a dependent module without having to change the code that
uses the dependency. To do this, you first need to decouple the software components from the other
components they are dependent on. This lesson describes how to decouple dependent components from
their dependencies. The lesson also explains how you can use the IServiceCollection interface in ASP.NET Core Web API to implement dependency injection.
Lesson Objectives
After completing this lesson, students will be able to:
• Describe how to use the ASP.NET Core Web API dependency injection.
Dependency injection
Modern software systems are built out of different
software components. For example, many
distributed applications use a layered architecture
that separates different responsibilities to different
components (Logical Layers of Distributed
Applications are discussed in Module 1, “Overview
of Service and Cloud Technologies,” Lesson 1,
“Key Components of Distributed Applications”).
Dependency injection is a common software
design pattern that is used to decouple software
components from other components they are
dependent on. This is done so that dependencies
could be easily replaced if needed. For example, it is common to replace the dependencies during tests
with a mock object in order to control the result they return.
At the core of the dependency injection design pattern, there are three types of components:
• Dependent component. This is the software component that requires the services of other components to perform its work.
• Dependencies. These are the software components that the dependent component depends upon.
• Injector. A component that obtains or creates instances of the dependencies and passes them to the dependent component.
In order for the dependent component to be decoupled from its dependencies, it should only define
them as interfaces. The dependencies should be passed into the dependent component as method or
constructor parameters by the injector, allowing the injector to replace the concrete implementation of
the dependency at runtime.
• Lifetime. This is the lifetime of the service instance that will be created. There are three kinds of lifetime:
o Singleton. The service will have one instance while the application is running.
o Scoped. The service will have one instance for each request.
o Transient. A new instance of the service will be created each time it is requested.
Register services in the ConfigureServices method of the Startup class by using the Add method and the AddXXX extension methods.
Registering dependencies
public void ConfigureServices(IServiceCollection services)
{
    services.Add(new ServiceDescriptor(typeof(IService), typeof(ServiceImpl), ServiceLifetime.Singleton));
    services.AddSingleton<IService1, ServiceImpl1>();
    services.AddScoped<IService2, ServiceImpl2>();
    services.AddTransient<IService3, ServiceImpl3>();
}
Constructor injection
public class ValuesController : Controller
{
    private IService service;

    // The container supplies the dependency when it creates the controller.
    public ValuesController(IService service)
    {
        this.service = service;
    }
}
• Action injection. When a specific action needs a dependency, it can be resolved by adding a
parameter to the action and decorating it with the FromServices attribute.
Resolve a dependency in the action method by adding a parameter decorated with FromServices
attribute.
Action injection
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get([FromServices] IService service, string name)
    {
        return new string[] { "value1", "value2" };
    }
}
Module 5
Hosting Services
Contents:
Module Overview 5-1
Lab B: Host an ASP.NET Core Web API in an Azure Web App 5-16
Lesson 3: Packaging services in containers 5-17
Lab C: Host an ASP.NET Core service in Azure Container Instances 5-30
Module Overview
The most important aspect of implementing a service is hosting it so that clients can access it. For
Microsoft ASP.NET Core services, the host is responsible for allocating all the resources required for the
service. The host opens listening ports, creates an instance of a service when a request arrives, and
allocates memory and threads as required. If the host fails, the service fails. There is a one-to-one
dependency between the host and the service. The reliability and performance of the host directly affect
the quality of the service.
You can self-host your ASP.NET Core services. In this module, you will explore the various ways of hosting
your services on-premises and on Azure, and the benefits each type of host provides, in relation to issues
such as reliability, performance, and durability.
Apart from deciding the type of hosting service to use, web-hosted or self-hosted, you also need to think
about the hosting environment for your service - whether on-premises or in the cloud platform.
Considerations for deciding which environment to use include:
• Specific hardware requirements. When you host services on-premises, you have more control over the
hardware of your server than in the cloud platform. In the latter case, you only know how many
Central Processing Units (CPUs), memory, and disk space your virtual machines have.
• Scaling requirements. Hosting services on-premises requires predicting usage and provisioning servers in advance. Other than the costs involved with over-provisioning, on-premises hosting can also be impacted by under-provisioning caused by rapid growth and an unpredictable increase in demand. Hosting your services in the cloud environment keeps your servers available by using the elasticity of the cloud platform to scale out when more resources are required.
• Legal requirements. In some countries, certain types of data, such as personal data, can only be stored
within the boundaries of the country. For on-premises hosting, this is achieved easily, but when you
host your services and data in the cloud platform, your data might be copied between data centers in
different locations on the globe, for reasons such as availability and backup.
Your decisions related to hosting type and hosting environment, although seemingly independent of each other, can affect each other. For example, if you choose to host your services in the Microsoft Azure cloud environment, you need to choose between hosting your services in Azure Web Apps or Docker containers, or using Azure Functions.
Note: The Azure portal UI and Azure dialog boxes in Microsoft Visual Studio 2017 are
updated frequently when new Azure components and SDKs for .NET are released. Therefore, it is
possible that some differences will exist between screenshots and steps shown in this module and
the actual UI you encounter in the Azure portal and Visual Studio 2017.
Objectives
After completing this module, you will be able to:
• Host services on-premises by using Windows services and Microsoft Internet Information Services
(IIS).
• Host services in the Azure cloud environment by using Web Apps, Docker containers, and Azure
Functions.
Lesson 1
Hosting services on-premises
When you want to host a web service on-premises, you can host it by using a Windows service or IIS. A
Windows service is a long-running application that runs in the background. Windows services have no
user interfaces, and they do not produce any visual output. Services run in the background while a user
performs any other task in the foreground, but they also run when a user is not logged on. This makes
Windows services a good candidate for classic server applications, such as an email server or a File
Transfer Protocol (FTP) server.
Running a Windows service without a user interface poses a debugging and operations challenge because
the user is not notified about warnings or errors. To overcome this, Windows services use the Windows
Event Log service and other logging frameworks to record tracing information and to notify the system
administrator about error conditions.
Lesson Objectives
After completing this lesson, you will be able to:
For more information about service user accounts, refer to the following link.
Demonstration Steps
You will find the steps in the “Demonstration: Hosting Services On-Premises by using Windows Services
with Kestrel (RunAsService)” section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md
• Shuts down your service if it is idle for a long time, to conserve resources.
• Starts your service after shutdown, when a message arrives.
• Recycles your service if it uses too much CPU or memory over time.
• Protects your service with rapid fail protection if your service fails or is unresponsive for a long time.
IIS uses a hierarchical-style directory management, where each virtual directory maps to a folder in the file
system. This virtual directory contains static files such as images and web pages, in addition to web
applications such as ASP.NET Core services. Because of the ability of the IIS to host multiple web
applications on a single server, you can deploy several ASP.NET Core services to IIS, each of them running
independent of each other.
Note: When different web applications share the same application pool, these applications
also share the same worker process. If one of the services causes its worker process to fail (for
example, because of a critical exception), all the hosted applications in the worker process will
also fail. To prevent such a scenario, consider separating web applications into different
application pools.
For more information about IIS architecture, refer to the following link.
Demonstration Steps
You will find the steps in the “Demonstration: Hosting ASP.NET Core Services in IIS” section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md.
Lifetime
• Self-hosting: The service process lifetime is controlled by the operating system, and is not message-activated.
• IIS hosting: IIS shuts down idle services to improve resource management. The service is reactivated when a message is received.
Endpoint Address
• Self-hosting: Configured in the app.config file.
• IIS hosting: Bound to the IIS virtual directory path, which contains the .svc file.
For more information about hosting options, refer to the following link.
Hosting Services
https://aka.ms/moc-20487D-m5-pg5
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 15 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.
Lesson 2
Hosting Services in Azure Web Apps
Module 1: “Overview of Service and Cloud Technologies” discussed various Azure Cloud Services, which
help develop and publish web applications at a global, distributed scale. Because web services and web
applications are instrumental to every system’s success, Azure has a number of ways to host web services,
based on your specific requirements. It can be as easy as writing a few lines of code in a single function
and deploying that function to Azure, or as flexible as building a complete virtual machine environment
running your favorite web server and web application.
In this lesson, you will explore the options available for hosting ASP.NET Core web services in Azure and
focus specifically on web apps running in Azure App Service. You will also explore the Azure App Service
features for sizing, scaling, and publishing your web service, and which other languages and platforms are
supported on Azure App Service, in addition to ASP.NET Core.
Lesson Objectives
After completing this lesson, you will be able to:
• Develop and publish ASP.NET Core web services to Azure App Service.
Hosting options for ASP.NET Core web services in Azure: Azure Web Apps,
Azure Container services, Azure Functions
As mentioned earlier, there are numerous ways to
host web services on the Azure cloud platform,
which greatly vary in complexity, flexibility, and
customization options. Azure provides both PaaS
(platform-as-a-service) and IaaS (infrastructure-as-
a-service) options for hosting web services, but
this lesson will focus mostly on PaaS. The
advantages of having the Azure cloud platform
managing the underlying infrastructure are
numerous: you don’t have to worry about
provisioning virtual machines and installing web
server software, you don’t have to worry about
operating system updates, you can very easily scale your application without manually deploying
additional machines, and so on.
The main alternatives for hosting a web service or a web application on Azure are:
• Azure Web Apps (Azure App Service). A powerful and flexible PaaS platform that provides automatic
scaling, easy deployment, and a fair amount of customization options. You don’t control the
underlying infrastructure and operating system, but you do control your application’s deployment,
dependencies, and configuration. You can size your service according to its current workload
demands, and then scale it to more powerful machines or a larger number of instances with only a
click of a button.
• Azure Functions. A scalable and flexible serverless platform. You give up additional control over the
underlying platform, but gain the ability to develop your application as a set of tiny building blocks
that are very easy to test in isolation and compose together. Azure Functions are discussed in Lesson
4: ‘Implementing serverless services’.
• Azure Container Service. An orchestration platform for applications deployed using containers. You
package your application and its dependencies into a container image, which can run anywhere and
scale as necessary. Containers and Azure Container Instances are discussed in Lesson 3: ‘Packaging
services in containers’.
• Microsoft Azure Virtual Machines. An IaaS platform for running virtual machines with predefined or
customized images. You have complete control over the execution environment, including unfettered
admin access to the target machines. On the other hand, you have to worry about operating system
updates, installations, dependencies, scaling, and more.
• Azure Cloud Services. A deprecated PaaS offering for running service deployments on top of virtual
machines managed by the Azure platform. Although Azure Cloud Services were among the first
features of the Azure platform, it is recommended that most customers migrate to either higher-level
PaaS solutions, such as using Azure App Service, or lower-level IaaS solutions with Azure Virtual
Machines.
For more information on publishing an ASP.NET web application to Azure Virtual Machines,
refer to the following tutorial.
https://aka.ms/moc-20487D-m5-pg6
When choosing a hosting environment for your web application or web service on Azure, there are many
areas of overlap. After completing this module, you will have a better picture of the available offerings
and how they can be customized and adapted to your system’s needs. Some general things to consider
include:
• Do you need a complete control over the execution environment, such as Remote Desktop or SSH
access? If so, consider using Azure Virtual Machines.
• Can you build your application as a set of standalone, independent, scalable functions that rely on
other Azure services for state storage? If so, consider using Azure Functions.
• Do you already package or plan to package your application as a Docker container? Are you
considering using container orchestration platforms such as Kubernetes? If so, consider using Azure
Container Service.
• Are you building a standard web application or service (API) that doesn’t meet any of the above
criteria? If so, consider using Web Apps feature of the Azure App Service.
• Support for multiple languages and development frameworks. You can use ASP.NET Core, Node.js,
Java, Python, PHP, and Ruby—and these are just the officially supported runtimes. For the target
platform, you can choose between Windows IIS and Linux.
• Powerful monitoring and diagnostics platform. Includes automatically-collected performance metrics,
support for diagnostic log streaming, web server log collection, and a troubleshooting console that
can inspect files, processes, and other types of information on the actual machine running your
service.
Additional useful features include SSL support, custom domains, IP address restrictions, integrations with
other Azure services, security and compliance, and many others.
Note: Although it appears that Azure App Service provides granular control over the target machine, there are actually some restrictions on what your code can do in the environment, even if you’re using a plan that assigns you a dedicated virtual machine. For example, the user account that runs your application is not assigned administrator privileges, which means some types of privileges are not available to it. The capabilities that are not available include full Windows registry access, using Event Tracing for Windows, and reconfiguring low-level network settings.
For more information about the operating system functionality available to application and
services in Azure App Service, refer to the following link.
https://aka.ms/moc-20487D-m5-pg8
When you create a new Azure Web App, you also create (or choose) an Azure App Service plan in which it
runs. You can share multiple applications and services in a single App Service plan. The plan defines in
which region your computer resources will be provisioned, how many machines will serve your traffic, and
the size of these machines. There are multiple pricing tiers that you can choose from, which determine the
resources and features available to your application:
• Free. This is the lowest tier. Your application or service runs on a shared machine, which also runs
other customers’ applications. Your service is assigned a CPU quota (60 minutes a day), memory
quota, disk space quota, and network traffic quota that you cannot exceed. When the quota is
exceeded, your application becomes unavailable until the end of the billing period. You should only
use the Free tier for development and testing, or for very low-traffic applications and services.
• Shared. Your application or service still runs on a shared machine, and is still assigned a CPU quota of
240 minutes a day that you cannot exceed. Other resources, such as networking, are charged using
standard pricing. You also get access to some features, such as custom domains for your service,
which are not available in the Free tier.
• Dedicated. Your application or service runs on dedicated virtual machines, shared only with other
apps in your App Service plan. You can control the size of the machines (see below) and the number
of machines to deploy, up to 20 instances. Even more features are available in this tier, such as
deployment slots (discussed in Module 6: “Deploying and Managing Services, Lesson 5: “Deploying to
Staging and Production”), Traffic Manager integration (discussed in Module 10: ‘Scaling Services’,
Lesson 3: ‘Azure Application Gateway and Traffic Manager’), automatic backups, and others.
• Isolated. Similar to the Dedicated plan, but your virtual machines are also part of a separate Microsoft
Azure Virtual Network, which means they are isolated in terms of network and not only computers.
You can further control the size and number of the machines, up to 100 instances.
When using the Dedicated and Isolated tiers, you can control the size of the virtual machines running
your application and the number of machines that are created for you. Below are some examples:
• In the Dedicated - Basic plan, you can scale to up to 3 machines. The machine size ranges from 1
core with 1.75 GB RAM to 4 cores with 7 GB RAM.
• In the Dedicated - Standard plan, you can scale to up to 10 machines. The machine size ranges from
1 core with 1.75 GB RAM to 4 cores with 7 GB RAM.
• In the Dedicated - Premium plan, you can scale to up to 20 machines. The machine size ranges from
1 core with 3.5 GB RAM to 4 cores with 14 GB RAM. The CPUs in these machines are faster than those
in the Standard plan, and they are equipped with SSD storage.
• In the Isolated plan, you can scale to up to 100 machines. The machine size ranges from 1 core with
3.5 GB RAM to 4 cores with 14 GB RAM. There is also an additional flat fee for each App Service
Environment when using this plan.
For more information about the features included in each Azure App Service pricing plan,
refer to the following link:
https://aka.ms/moc-20487D-m5-pg9
For more information on Azure App Service pricing, refer to the following link:
https://aka.ms/moc-20487D-m5-pg10
The following image shows the Web App Create dialog box.
The following screenshot illustrates the process of creating a new Azure App Service plan. You specify the
location and the pricing tier for the plan and can choose the exact combination of resources and prices
that you require.
The following screenshot shows the Overview blade for the newly created web app. Note the deployment
details on the right, which you can use to deploy through FTP or configure other forms of deployment.
FIGURE 5.5: THE SCALE OUT BLADE IN THE APP SERVICE SETTINGS
Note: By using Visual Studio 2017, you can also integrate your application with additional
Azure services, such as Azure SQL Database, discussed in Module 7: ‘Implementing Data Storage
in Azure’, Lesson 3: ‘Working with Structured Data in Azure’. Additional deployment options from
Visual Studio to Azure App Service deployment slots are discussed in Module 6: ‘Deploying and
Managing Services’, Lesson 5: ‘Deploying to Staging and Production’.
The following screenshot shows the Visual Studio Publish dialog box, configured to create a new
Azure Web App and publish the current project to it.
The following code example shows a controller action that reads a configuration setting, which can be
supplied to the application through an App Service application setting (exposed as an environment
variable):
[HttpGet]
public Flight GetFlightById(string id)
{
    // Reads the IsProduction setting, which App Service injects as an environment variable
    bool isProduction = bool.Parse(_configuration["IsProduction"]);
    // … The rest of the code
}
For more information on how to merge the environment variables into the configuration
available to your ASP.NET Core application, refer to the following link:
https://aka.ms/moc-20487D-m5-pg11
The following screenshot illustrates how you can modify environment variables and application
configuration settings for your Azure Web App.
Demonstration Steps
You will find the steps in the “Demonstration: Hosting ASP.NET Core Web APIs in Web Apps“ section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD05_DEMO.md.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.
Lesson 3
Packaging services in containers
Since their popularization by Docker Inc., containers have quickly become the de-facto industry standard
for packaged software delivery. By using containers, developers can package the application along with all
its dependencies, while administrators can deploy and monitor the application across a variety of
infrastructures.
In this lesson, you will explore the benefits of container technologies, the fundamentals of Docker
containers, and Docker integration in Visual Studio 2017. You will use Visual Studio 2017 to create your
first Docker container running an ASP.NET Core application and publish it to Azure Container Instances,
Microsoft’s lightweight cloud solution for hosting individual containers quickly and efficiently.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain how OS virtualization differs from hardware virtualization.
Container images are composed of layers that can be shared between containers. For example, if two
services that use Java on Ubuntu Linux are deployed in two separate containers, the Ubuntu files and the
Java Virtual Machine installation will be shared between the two containers on disk.
On the other hand, the isolation provided by container runtimes is not as hermetic as that provided by
hardware virtualization; from many perspectives, containers should not be considered a security
boundary.
Note: Historically, containers have been available in one shape or another for several
decades. For example, Solaris zones were released in early 2005, and are a fairly comprehensive
containerization technology. The Linux kernel mechanisms used by Docker (control groups,
namespaces, and security modules) were also available for several years before Docker’s
popularity exploded. Still, Docker was able to bring containers into the mainstream by creating
an easy-to-use solution that makes container technology approachable and usable by typical
developers and administrators.
The operating system primitives that isolate containers from each other are different from the primitives
that isolate virtual machines. Each container has its own view of the file system, its own list of processes,
and its own network interfaces, even though at the operating system level, these are all shared between
containers. The mechanisms for restricting container resource utilization are also different from those used
for virtual machines. You can limit the CPU usage of a container (for example, assign 50% of one CPU
core to a container), the memory usage of a container, and even the disk reads and writes it can perform
to a specific disk device. These mechanisms are provided by the operating system. However, when using
virtual machines, these would have to be provided by the hardware virtualization mechanism (a
hypervisor).
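For example, the following Docker CLI commands sketch how such limits can be applied (the image
name, myimage, is a placeholder):
# Assign half of one CPU core and 256 MB of RAM to the container
docker run --cpus="0.5" --memory="256m" myimage
# Limit reads from a specific disk device to 10 MB per second
docker run --device-read-bps /dev/sda:10mb myimage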
Containers offer several key benefits:
• Consistent environment from dev to prod. Developers can package software and versioned
dependencies into a container image that is used consistently in the development environment, in
testing, in staging, and in production. This results in productivity gains, because teams don’t spend time
diagnosing issues that stem from environmental differences or mismatched versions of software
dependencies and libraries.
• Isolation. Although containers are lightweight and share numerous resources with each other, they
are still quite isolated. Containers cannot accidentally access each other’s files, processes, or in-
memory objects; they also can’t send each other network traffic without explicit configuration.
• Infrastructure as code. The Docker image format makes it possible to create a container image that
can be run anywhere, on any operating system or distribution. All major cloud providers, including
Microsoft Azure, provide services for hosting and orchestrating container-based deployments. To
create Docker images, you will often use Dockerfiles, simple text files that describe a step-by-step
process for creating a container image from a base image by adding files, installing required software
packages, and setting environment variables.
Leading companies use containers today in use cases like the following, which highlight the unique
advantages of container-based systems:
• Distributed applications and micro-services. Containers make it easy to deploy your system as a mesh
of interconnected micro-services, responsible for small slices of your application’s functionality. Each
micro-service can be developed, tested, deployed, and versioned independently of the others, and
isolated using container technology.
• Batch jobs. You can create a standalone batch job and package it into a container image. The
resulting container image can then be deployed across a variety of pipelines, and run in parallel very
easily.
• Continuous integration (CI). You can use containers in your CI/CD pipeline to build your application,
test it in isolation, and then deploy it to production with the same consistent environment used by
developers on their personal machines. The build artifact from the CI pipeline can be a versioned
container image, which can be deployed elsewhere for debugging and reproducibility when required.
When you install Docker on your machine, the Docker engine runs in a service process, and you
can interact with it using the Docker client application, which is a command-line tool
(docker.exe on Windows, docker on other platforms). Other ways to interact with the Docker
engine include Kitematic (a GUI application) and Visual Studio 2017. In the next topic, you will
learn that Visual Studio 2017 has a comprehensive set of tools for interacting with Docker, which
you can use when developing and subsequently publishing your ASP.NET Core application.
To learn more about Kitematic and download it to your machine, visit the following link.
https://aka.ms/moc-20487D-m5-pg15
In Docker’s terminology, a container image is a packaged application with all its dependencies,
configurations, and executable code. It is implemented as a simple tar archive, which—if extracted—forms
a set of one or more files on disk. A container or a container instance is a running instance of an image;
you can create multiple instances from the same image on a single machine or on multiple machines.
The following command launches the hello-world container image in your terminal and displays its
output.
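docker run hello-world
If the hello-world image is not already present on your machine, the Docker engine first pulls it from
Docker Hub, runs it, and prints its output to the terminal.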
docker ps. Lists the currently running containers, including their names, network ports, and identifiers.
docker kill. Kills a currently running container, but doesn’t delete the container files from disk. A
container killed with docker kill can be subsequently restarted with docker start.
docker rm. Deletes a container’s files from disk. This action cannot be undone. If you need your container
to store persistent data, you should use volumes, which are outside the scope of this course.
Container registries
You can store Docker images only on your local machine, but this is uncommon. In most cases, a
container registry stores versioned container images that can be pulled to a machine for execution. You
can push an image to a container registry after building it. As discussed in the previous topic, a common
workflow is to have your CI build server produce a versioned container image and push it to the container
registry used by the rest of your infrastructure, including the deployment process.
Before you push a container image to a registry, you need to tag it. The docker tag command will add a
tag to an image, and the docker push command will push it to a container registry. The default container
registry is Docker Hub, which you can use to store an unlimited number of container images for free as
long as they are publicly accessible. Private container registries are also available from multiple vendors,
including Microsoft’s Azure Container Registry. You can even run your own registry in a container using
Docker’s official registry container image.
Docker Hub
https://aka.ms/moc-20487D-m5-pg17
Selecting the right container image has a big effect on your disk and memory footprint. For example,
using a .NET Core SDK image with build tools when you only need the .NET Core runtime is wasteful, and
can take hundreds of megabytes of additional space. Some important images for running .NET Core and
ASP.NET Core applications, including the following, are distributed by Microsoft to the Docker Hub
container registry:
• microsoft/dotnet:runtime. Operating systems: Linux (Debian Stretch), Windows Nano Server.
Contains the .NET Core runtime files; can be used for launching (but not building) .NET Core
applications.
• microsoft/dotnet:runtime-deps. Operating system: Linux (Debian Stretch). Contains only the Linux
libraries required for running self-contained .NET Core applications (does not include the .NET Core
runtime or the SDK).
Note: Most of the container images in the above list have the exact same names across
different operating systems. For example, when you use Docker on Windows with Windows
Containers and pull the microsoft/dotnet:runtime tag, you will get a Windows Nano Server
container; but if you use Docker on Windows with Linux Containers (or Docker on Linux or
macOS), you will get a Linux container running Debian.
If you plan to use container images built by others, such as Microsoft’s ASP.NET Core container images,
you will only need to copy in your application’s files and configure environment variables, volumes, and
networking ports. However, if you plan to build your own container images; for example, to have more
precise control over software installed in the container, you will need to write a Dockerfile.
A Dockerfile is a simple text specification containing instructions for building a container image. The new
image is always based on an existing image (even when using the empty scratch image), and can
customize that image with additional software installations, environment variables, files, and arbitrary
commands.
The following Dockerfile is based on the microsoft/aspnetcore:2 image. It adds the application’s binary
files from the host’s current directory to the /app directory in the container, and specifies that the
command for launching the container is dotnet:
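A minimal sketch of such a Dockerfile (the application DLL name, MyService.dll, is a placeholder):
FROM microsoft/aspnetcore:2
# Copy the published application binaries from the host's current directory
COPY . /app
WORKDIR /app
# Launch the application with the dotnet host
ENTRYPOINT ["dotnet", "MyService.dll"]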
For a full reference of the Dockerfile language, visit the following link.
https://aka.ms/moc-20487D-m5-pg18
To build a Dockerfile, you use the docker build command, which produces a new container image that
you can then run or push to a container registry. The docker build command sends the build context to
the Docker engine along with the Dockerfile; by default, the build context includes all the files under the
current directory. This matters when the Dockerfile references files in the build context, for example,
when the ADD or COPY instructions copy application binaries or configuration files into the resulting
container image.
Note: You can use a .dockerignore file to specify which files should not be sent as part of
the build context. This is similar to the .gitignore file used by the Git source control system.
Use the following commands to build a new container image with a local tag from the Dockerfile in the
current directory (.), tag it with a specific user tag, and push it to Docker Hub:
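A sketch of these commands (the image name, myapp, and the Docker Hub user name, myuser, are
placeholders):
docker build -t myapp:1.0 .
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0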
Note: The preceding code snippet used a fairly common format for image tags, in which the
tag carries a version number. The version number is completely up to you; it is not consumed or
used automatically by Docker’s tools.
Demonstration Steps
You will find the steps in the “Demonstration: Creating an empty ASP.NET Core Docker container“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD05_DEMO.md.
To install Docker on your Windows machine alongside Visual Studio 2017, refer to the
following link:
https://aka.ms/moc-20487D-m5-pg19
When you create a new ASP.NET Core project with Docker support enabled, or when you add Docker
support to an existing project, Visual Studio creates a number of files for you:
• Dockerfile. Contains instructions for creating a container image that hosts your web application,
based on Microsoft’s official Docker image.
• docker-compose.yml. A Docker Compose manifest file (see below) that helps bring your application
up along with any dependencies, network port mappings, volumes, and environment variables.
Additionally, Visual Studio 2017 configures the build process such that when you build your project, the
Docker client is invoked to build your Docker container image; and when you launch your project, the
Docker client is invoked to launch that container and attach a debugger to it.
The following screenshot illustrates the Visual Studio New Project wizard with the Docker support check
box.
The following docker-compose.yml file describes an ASP.NET Core web application container and a
linked Redis Cache container, which will be brought up or down as a single logical unit.
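A sketch of such a file (the service names are illustrative):
version: '3'
services:
  webapp:
    build: .              # built from the current directory
    ports:
      - "5000:5000"       # forward host port 5000 to container port 5000
    depends_on:
      - redis
  redis:
    image: redis:alpine   # pulled directly from Docker Hub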
In the preceding example, the Redis Cache container will be deployed directly from the upstream
redis:alpine image, which will be pulled from Docker Hub. The web application container will be built
from the current directory, and have its port 5000 forwarded to port 5000 on the host.
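When you add Docker support to an ASP.NET Core project, Visual Studio also generates a multi-stage
Dockerfile, roughly like the following sketch (reconstructed from the description below; HelloWebApp is
the sample project name):
FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY HelloWebApp.sln ./
COPY HelloWebApp/HelloWebApp.csproj HelloWebApp/
RUN dotnet restore
COPY . .
WORKDIR /src/HelloWebApp
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "HelloWebApp.dll"]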
The preceding Dockerfile consists of multiple sections, which specify instructions for a Docker multi-step
build. Multiple intermediate containers will be generated, but only the last FROM section specifies which
container image is the result of the build:
• FROM … AS base. This section declares the first build step, which is an exact clone of the
microsoft/aspnetcore:2.0-nanoserver-1709 container image, but it adds the /app directory and
exposes port 80.
• FROM … AS build. This section declares another build step, in which you use a different container
image as base: the microsoft/aspnetcore-build:2.0-nanoserver-1709 container image. This image
is designed for building ASP.NET Core web applications, and not just hosting them, so it contains the
compiler, build tools, and everything else required for building applications. The subsequent
instructions specify that a /src directory should be created, the solution and project files copied in,
and then the dotnet restore step runs to restore NuGet packages. Finally, the application source files
are copied in and the dotnet build step runs to build the application and copy the results to the /app
directory.
• FROM build AS publish. This section is based on the previous step and runs the dotnet publish
command, which finalizes the application for deployment and copies the resulting files to the /app
directory. Note that this container is still based on the aspnetcore-build image, which contains the
ASP.NET Core build tools.
• FROM base AS final. This section is based on the base image (the first build step), which does not
contain build tools and is designed for running a packaged application. The COPY command copies
the /app directory contents from the publish image, and then declares that the entry point for the
ASP.NET Core application is dotnet HelloWebApp.dll. Note that the dotnet command is part of the
aspnetcore image.
When you build the project in Visual Studio 2017 or launch it for debugging, Visual Studio 2017 launches
the Docker client and builds the container image. You can see the build steps in the Output window, as in
the following screenshot.
Visual Studio Output window, showing the build steps of the Docker container.
The following screenshot shows the Visual Studio Pick a publish target dialog box for publishing into a
container registry.
Demonstration Steps
You will find the steps in the “Demonstration: Publishing into a Container“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md.
Azure provides multiple container-related services that can run containerized workloads at scale. These
include the following:
• Azure Container Service (ACS). Helps run a cluster of virtual machines hosting containers and
container orchestrator nodes. ACS supports Datacenter Operating System (DC/OS), Kubernetes, and
Docker Swarm as orchestrators, but requires quite a bit of manual management for provisioning,
updating, and maintaining the cluster. Azure Kubernetes Service (AKS) provides a managed
Kubernetes cluster, where the management and worker nodes are managed by the Azure platform.
You only need to specify the number of worker nodes you want, and everything else—from
provisioning to upgrades—is managed by the platform. Even though ACS and AKS dramatically
simplify the process of productionizing your container-based application, they require some
background in container orchestration concepts, such as Kubernetes pods, services, and replicas (for
Kubernetes clusters).
• Azure Container Registry. Helps securely store versioned container images in an Azure-hosted
registry, and makes them accessible to your other Azure services. You can publish Docker container
images to Azure Container Registry from Visual Studio 2017, or from your build pipeline in Visual
Studio Team Services and other tools.
• Azure Batch. Helps run large-scale batch jobs on Azure’s compute infrastructure. Although Azure
Batch supports non-container workloads as well, it now has first-class support for containers, so you
can package your batch job in a container image and ship it to Azure Batch for massively parallel
execution.
• Azure App Service. Helps run web applications without worrying about the underlying infrastructure.
In previous modules, you learned to deploy web applications directly to App Service without using
containers. However, App Service supports containerized applications, so that instead of deploying
code or Web Deploy packages, you can publish a container image to App Service and have it hosted
on the App Service environment.
• Azure Container Instances. The newest of these container-related offerings; it helps create and run
container instances without worrying about cluster orchestration, management nodes, and other
concerns that arise from coordinating thousands of container instances. Azure Container Instances is
suitable for simple container-based workloads, where you want to rapidly deploy a handful of
container images and make them accessible with a public IP address.
To run an ASP.NET Core web application in Azure Container Instances, you need to publish your
application’s container image to a container registry, such as Azure Container Registry. You can use the
Visual Studio Publish wizard to create a new Azure Container Registry and then publish the container
image to that registry; or, you can use the Azure portal to create the Azure Container Registry first. Then,
you can use the Azure CLI or the Azure portal to create a new container instance using the published
container image.
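For example, the following Azure CLI command is a sketch of creating a container instance (the resource
group, name, and image are placeholders; a private registry would also require credential parameters):
az container create \
    --resource-group my-resource-group \
    --name flight-booking \
    --image myregistry.azurecr.io/flight-booking:1.0 \
    --ports 80 \
    --ip-address Public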
The following screenshot shows how to create a new container instance on Azure Container Instances in
the Azure portal.
The second step of the wizard, where the operating system platform, number of CPUs, memory
requirements, and public IP address and port settings are configured.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.
Lesson 4
Implementing serverless services
The complexities of managing modern infrastructure are the result of numerous revolutions in the IT
industry. To a large extent, the popularity of cloud services like Microsoft Azure is owed to the difficulties
of deploying and managing a fleet of physical machines and scaling these to meet modern workloads.
Similarly, the shift to containerized services, described in the previous lesson, is the result of a long-term
trend for minimizing the footprint of a packaged service and increasing the deployment density. In a way,
there is a transition from deploying and scaling machines to deploying and scaling individual components
like micro-services. The next logical step is to deploy and scale functions.
This lesson covers Azure Functions, Microsoft’s hosted serverless computing offering, which allows you to
deploy functions at cloud scale. You will use Visual Studio 2017 to develop and test functions locally, and
subsequently deploy them to Azure Functions and configure various triggers.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain how serverless computing abstracts away server deployment and management.
• Describe the types of problems that benefit from serverless computing.
• Describe how to deploy Azure Functions to Azure and monitor their execution.
• Describe how to use HTTP and other triggers to invoke Azure Functions.
Note: Clearly, “serverless” computing still requires servers to run functions. However,
because you do not manage these servers, and they are provisioned transparently to meet your
scale demands, it is as if they do not exist from your perspective; hence, “serverless.”
When creating a serverless application, you decompose your application logic into independent functions,
which can be invoked, scheduled, and monitored. With typical FaaS runtimes, Azure Functions included,
you only pay for actual execution time (often measured in CPU seconds), which means it is beneficial for
you to decompose your application’s logic into the smallest possible building blocks.
If function A is only invoked once per request, but function B is invoked 100 times, you don’t pay for the
compute and memory resources required for function A while function B is executing. This decomposition
is not only financially advantageous; it often helps to logically break your application into independent
parts, which are then easier to debug, update, and monitor in production.
You can write Azure Functions in a variety of programming languages, including C#, F#, and JavaScript. In
Azure Functions 2.0, there is experimental support for additional languages, including Java, Python, PHP,
PowerShell, and others. As a small unit of code, it is very likely that your function will require integrations
with external data sources and triggers, and the Azure Functions runtime provides a rich set of
integrations that includes Azure Cosmos DB, Azure Storage, Azure Service Bus, and more. For example, a
typical Azure Function might be invoked by a new message posted to a Service Bus queue, and it would
then process the posted data and write a new record into a Cosmos DB table. Another function might be
invoked by a new entry created in an Azure Storage table, and will respond by sending an HTTP request
to an external service.
• Pay only for what you use. With very accurate sub-second billing, you pay only for the periods of time
when your function is actually running and processing work. You don’t pay for servers or container
instances that wait idly for new work to arrive. One of the reasons sub-second billing is possible is that
functions are very fast to launch, so you can finish processing a new request in just a few seconds and
only get billed for these few seconds of processing time.
• Simplified software model. Serverless applications fit well with the micro-services model, where the
application is decomposed into the smallest independent building blocks. With serverless computing,
these building blocks are functions.
At the same time, serverless computing has some distinct disadvantages. It is not a silver bullet that fits
every application. In many cases, a hybrid architecture where parts of the system are delivered as
functions and other parts as more traditional software components is more appropriate. Some of the
problems with serverless computing include:
• Long-running applications. For long-running batch jobs, it may be more cost-effective to run the job
on a dedicated machine than to use a serverless runtime and pricing model.
• Vendor lock-in. Unless you carefully use a FaaS abstraction, such as OpenFaaS, your serverless
architecture may be tied to a specific vendor, like Microsoft Azure or Google Cloud Platform.
Migrating to another vendor might require significant changes in your application logic and
deployment pipeline.
• Cold start. Because each function invocation is completely stand-alone, it might take extra time for
your function to handle a request because of JIT compilation or other startup costs, especially with
languages that were not designed for super-fast startup times, such as Java.
• Difficulty of local development. With some serverless runtimes, developing and testing the function
locally is difficult or even impossible. As a result, you need to develop test frameworks and simulator
tools for development purposes. (Fortunately, Azure Functions offers first-class local development and
debugging support in Visual Studio 2017, and even remote debugging for functions executing in
Azure.)
As with any new technology, you need to consider the benefits and challenges of serverless computing
and see if it is a good fit for your application. Serverless computing is often a great fit for modern
distributed architectures that consist of a large number of small, independent web services, which interact
with other cloud-hosted resources, such as databases and queues.
Azure Functions offers two hosting models:
• Consumption plan. Resources are allocated dynamically as your functions run, and you pay only for
the time your functions actually execute. This is the serverless model described earlier in this lesson.
• App Service plan. Resources are allocated statically, as with Azure Web Apps and other App Service
resources. You configure the number of instances and the size of the instances running your
functions, and pay the same regardless of how many function invocations are actually executed.
The following screenshot shows configuration settings in the Azure portal for creating a new Azure
Function App. You can configure the resource group, hosting plan (consumption or App Service), location,
and other parameters before creating the Function App.
The configuration dialog box for creating a new function within an Azure Function App. You can select
the trigger type that invokes the new function. Use “HTTP trigger” for now.
The function accepts an HTTP request, retrieves the source and destination query parameters, and
returns a simple text HTTP response. The following is a sketch of such a function, with the trimmed
beginning of the listing reconstructed:
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    string source = req.GetQueryNameValuePairs().FirstOrDefault(q => q.Key == "source").Value;
    string destination = req.GetQueryNameValuePairs().FirstOrDefault(q => q.Key == "destination").Value;
    return req.CreateResponse(HttpStatusCode.OK,
        $"Booked from {source} to {destination}");
}
The preceding code example shows how a function can respond to external HTTP requests. It can now be
accessed from a browser by using its associated URL, which brings about the issue of authentication.
HTTP-triggered functions support three forms of authentication to control who can access the function:
• Function key. A per-function API key needs to be attached to each incoming request.
• Host key. A global admin API key needs to be attached to each incoming request.
• User authentication. You can configure login with various identity providers (such as Facebook or
Google), or Azure Active Directory.
To learn more about the Azure Functions C# programming model, refer to the following link:
https://aka.ms/moc-20487D-m5-pg21
To learn about configuring user authentication with Azure App Service, refer to the following
link:
https://aka.ms/moc-20487D-m5-pg22
You can also use the Azure portal to monitor your function’s execution, read its log output, navigate its
local file system, and perform additional management and diagnostic tasks, which are outside the scope
of this module. A very useful feature is that you can test your function in the browser, without having to
actually force its trigger condition.
The following screenshot shows the Test pane in the Azure Functions portal, which helps test an HTTP-
triggered function without worrying about authentication and properly formatting the parameters.
For the settings in the Azure Functions management portal, refer to the following link:
https://aka.ms/moc-20487D-m5-pg23
Demonstration Steps
You will find the steps in the “Demonstration: HTTP-triggered Azure Function“ section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md.
Besides HTTP triggers, Azure Functions supports several other trigger types, including:
• Service Bus queue. This trigger will invoke your function when a new message arrives in a Service Bus
queue.
• Cosmos DB. This trigger will invoke your function when insertions or updates occur in a partition that
you’re interested in monitoring.
• Event Hub. This trigger will invoke your function when a new event is inserted into the event hub on
which you’re listening.
In addition to triggers, bindings help Azure Functions connect to data that is stored in remote services and
output data to remote services. For example, by using the Table Storage binding, your function can
automatically read an entity from a Table Storage, and by using the Sendgrid binding, your function can
automatically send an outgoing email message.
When you create your function in the Azure portal and configure its triggers and bindings, the
configuration is stored in a file named function.json. You can inspect this file to review and update the
triggers and bindings if necessary.
The following screenshot shows the configuration dialog box for creating a new function triggered by a
new message inserted to a Queue Storage.
The following code example shows the function.json file for a function configured with a Storage queue
trigger, a Table Storage input binding, and a Table Storage output binding.
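A reconstruction of that file is sketched below (the connection and partitionKey values are assumptions):
{
  "bindings": [
    {
      "name": "reservationId",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "hotel-reservations",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "paymentDetails",
      "type": "table",
      "direction": "in",
      "tableName": "paymentdetails",
      "partitionKey": "payments",
      "rowKey": "{reservationId}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "bookedreservations",
      "connection": "AzureWebJobsStorage"
    }
  ]
}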
In the previous example, the function is triggered by a new Storage queue message in a queue named
hotel-reservations. The function parameter corresponding to that message is called reservationId.
Additionally, the function has an input trigger referencing an entity in a Storage table named
paymentdetails, which is assigned to a parameter named paymentDetails. The {reservationId}
reference indicates how the table entity should be retrieved based on the queue message contents.
Finally, the function returns an entity, which is then written into a Table Storage named
bookedreservations.
When you configure bindings in the Azure portal by using the function.json file, you can access them as
function parameters (for input bindings) or return values (for output bindings) of your function. The
parameter and return value types depend on the type of the binding. For example, for the Table Storage
input and output bindings, you only need a class that has RowKey and PartitionKey properties
corresponding to the table keys.
The following C# sketch corresponds to the function.json trigger and binding definitions shown in the
preceding example (the entity type names and properties are illustrative):
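public class PaymentDetails
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public decimal Amount { get; set; }
}

public class BookedReservation
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public decimal Amount { get; set; }
}

public static BookedReservation Run(string reservationId, PaymentDetails paymentDetails, TraceWriter log)
{
    // reservationId arrives from the hotel-reservations queue trigger;
    // paymentDetails is populated by the Table Storage input binding.
    // The returned entity is written to the bookedreservations table.
    return new BookedReservation
    {
        PartitionKey = "bookings",   // assumed partition scheme
        RowKey = reservationId,
        Amount = paymentDetails.Amount
    };
}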
For a full list of triggers supported by Azure Functions and how they can be integrated in
your function using bindings, refer to: https://aka.ms/moc-20487D-m5-pg24
A screenshot of the Visual Studio new project wizard when creating an Azure Functions project.
For your convenience, when developing the function locally, instead of using the function.json file, you
can use C# attributes to specify triggers and bindings for your function. To name a few examples, you can
use the [QueueTrigger] attribute to specify that the runtime should invoke your function when a new
message is inserted to a Storage queue; you can use the [FunctionName] attribute to specify your
function’s name; and you can use the [Blob] attribute to specify that the value stored in an output
parameter of your function should be written to an Azure Storage Blob.
For using trigger and bindings attributes in an Azure Functions project, refer to the following
link.
https://aka.ms/moc-20487D-m5-pg25
The following code example shows a C# function that will be invoked based on an HTTP trigger. It is a
trimmed version of the code generated by the Visual Studio wizard.
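A sketch resembling the generated code (the function name is a placeholder):
[FunctionName("HttpTriggerCSharp")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req,
    TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");
    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure Functions");
}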
When you launch the Azure Functions project from Visual Studio 2017 (under the debugger or directly), it
is not deployed to the cloud, but rather instantiated locally in the Azure Functions console host. For HTTP-
triggered functions, you can then issue HTTP requests locally; for other types of triggers, there are
different approaches that can be used to simulate function inputs and outputs.
The following screenshot shows the Azure Functions console host processing HTTP requests locally.
For the Azure Functions local development workflow using Visual Studio 2017, refer to the
following link: https://aka.ms/moc-20487D-m5-pg26
When your Function App is ready for deployment, you can use the Visual Studio Publish wizard to create
a new Azure Function App or deploy into an existing one. You should not mix and match Visual Studio-
generated functions and manually authored functions in the Azure portal within the same Function App.
The following screenshot shows the publishing progress for a Visual Studio Azure Functions project into
an Azure App Service Function App:
Demonstration Steps
You will find the steps in the “Demonstration: Developing, Testing, and Publishing an Azure Function from
CLI “ section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-
Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_DEMO.md.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.
Review Question
Question: What would you use to host a personal blog site in Azure, and why?
Module 6
Deploying and Managing Services
Contents:
Module Overview 6-1
Module Overview
In this module, you will learn how to deploy services to both on-premises and cloud environments. You
will also learn how to manage the interfaces and policies for your services.
Objectives
After completing this module, you will be able to:
• Explain how to define service interfaces by using API Management and Swagger.
• Explain how to define policies by using API Management.
Lesson 1
Web Deployment with Visual Studio 2017
One of the quickest ways to deploy a web application to a remote server is to deploy it with the Web
Deployment Framework, or Web Deploy. With Web Deploy, you can perform several tasks at one time,
such as copying files to remote servers, configuring IIS application pools, and applying permissions to the
file system. There are many ways to use Web Deploy, but one of the easiest is the publishing feature of
Visual Studio 2017.
In this lesson, you will learn about Web Deploy and how to deploy web applications by using Web Deploy
in Visual Studio 2017.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain how to create a Web Deploy package and perform live deployment with Visual Studio 2017.
Web Deploy, which was released in 2009, was created to simplify the deployment of web applications to
servers. It can do more than just copy files between a source and a destination; it can also perform
additional tasks such as copying the configuration
from one IIS to another, writing to the registry, setting file system permissions, performing transformation
of configuration files, and deploying databases.
Web Deploy is installed with Visual Studio 2017. If you have a computer that does not have Visual Studio
2017 installed on it, and you want to use Web Deploy, you will have to install it manually.
You can use Web Deploy to publish and synchronize an existing web application on a remote
server. You can also use Web Deploy to create a deployment package from an existing web
application and publish that package to a server later. A deployment package, which is a
standard compressed file, contains both the content that you want to copy to a server and an
instruction file that contains the list of actions to perform on the target server. The instructions, or
providers, as they are referred to in the Web Deploy terminology, control the various resources
that can be created or manipulated in the server, such as files, IIS applications, databases, and
registry. You can also create your own custom Web Deploy provider if you have to perform a task
that is not implemented by any of the existing providers, such as attaching a .VHD file as a local
hard drive.
For a list of available Web Deploy providers, refer to the following link.
Web Deploy Providers
http://go.microsoft.com/fwlink/?LinkID=298821&clcid=0x409
You can use Web Deploy in various ways. For example, when you use Visual Studio 2017 to publish a web
application, you are actually using the Web Deployment Framework for the task. The same is true when
you export an application from IIS Manager, or when you use the MSDeploy command-line tool.
For more information about Web Deploy, refer:
Introduction to Web Deploy
http://go.microsoft.com/fwlink/?LinkID=298822&clcid=0x409
Whichever deployment technique you choose, you can control some basic settings through the properties
of the web application project. If you do not plan to use Web Deploy, you can control only a few settings,
such as deploying files that are in the project folder but are not included in the project. If you plan to use
Web Deploy (either live or by creating a package), you can configure more settings, such as copying local
IIS application pool settings to the deployed server and listing the SQL script files, which will run as part of
the deployment.
To view these settings, right-click your web application project in the Solution Explorer window in Visual
Studio 2017, and then click Publish.
On the Pick a publish target page, click the IIS, FTP etc tab, and then click Create Profile.
On the Settings tab of the Publish wizard, ensure that the settings are appropriate, and then click Save.
For example, you can:
• Select which solution configuration you want to publish, such as debug or release.
If you select any of the Web Deploy techniques, you can also provide additional settings, such as a new
connection string that will replace the current connection string in the web.config file.
Visual Studio 2017 stores all the publish settings in the project so that the next time you have to publish
the application, you can do a one-click publish instead of supplying all the information again.
Visual Studio 2017 supports storing more than one publishing profile so that you can create profiles for
different scenarios. For example, you can create different profiles for testing and production
environments, each with its own database connection string.
For more information on how to use the Web Deploy dialog box, refer to the following link.
How to: Deploy a Web Project by Using One-Click Publish in Visual Studio 2017.
http://go.microsoft.com/fwlink/?LinkID=298825&clcid=0x409
Note: When you create a Web Deploy package, in addition to the packaged compressed
file, a .cmd file is created, together with a readme.txt file that describes how to run the .cmd file
to deploy the package.
Demonstration Steps
You will find the steps in the “Demonstration: Deploying a Web Application with Visual Studio“ section on
the following page. https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.
To install a package, copy the package to the host machine, and then run the following command at a
command prompt with administrator privileges.
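A sketch of that command, assuming the package was generated for a project named MyApp:
rem Use /T instead of /Y to perform a trial run without applying changes
MyApp.deploy.cmd /Y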
Lesson 2
Web Deployment on Linux
When Microsoft released the open source, cross-platform implementation of ASP.NET Core in late 2014,
one of the key strategic shifts was the newly-gained Linux support. For many organizations, the ability to
run Microsoft .NET applications (and especially ASP.NET web services) on Linux is a great consolidation
opportunity towards a single production operating system. The ability to use familiar development tools
on Windows and then host the resulting application on Linux as IT requirements dictate is important for
reducing costs.
This lesson covers the various options for publishing ASP.NET Core applications. It explores how ASP.NET
Core applications are deployed to Linux hosts and how to use Docker containers to run a reverse proxy
server in front of the ASP.NET Core host.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain how to publish ASP.NET Core applications that run on Linux.
• Describe reverse proxies and how they integrate with ASP.NET Core.
• Explain how to configure Nginx for ASP.NET Core.
• Explain how to deploy an ASP.NET Core web service with Nginx in Docker containers.
The open-source .NET Core is cross-platform and supports multiple operating systems as production
targets, including various versions of Windows (Windows Server, Windows Nano Server) and numerous
Linux distributions (Ubuntu, Debian, Red Hat Enterprise Linux, Alpine, and others). .NET Core also supports
numerous processor architectures, including Intel x86-64 (used by most servers today) and ARM (used by
IoT devices and mobile phones). Choosing the right operating system for all your organization’s
applications and services can be a major cost-saving factor, especially if you can consolidate your
production environments to a single operating system and processor architecture.
Some of the reasons for choosing Linux as your operating system include:
• Much lower on-disk and memory footprint in bare metal, virtualized, and containerized deployments
(in some cases by more than a factor of 10).
• First-class support for other platforms and programming languages, such as Java, Python, PHP, and
Go. Windows might not support some of these languages.
When you choose Linux as your production platform, you can use the end-to-end build, publish, and
deploy workflow from your Visual Studio 2017 development environment.
When you prepare an ASP.NET Core application for production deployment, you build it with
optimizations enabled (in the Release mode), and you can choose between a self-contained and a
framework-dependent publishing mode. This choice affects the size of your final build and, if you deploy
the application in a container, the size of your container image and which image layers can be shared
with other images.
Framework-dependent publishing
When you use framework-dependent publishing, the final build output of your application contains the
application’s dynamic-link libraries (DLLs) and all of its third-party dependencies, such as NuGet packages
and project references. However, the .NET Core libraries and the .NET Core runtime, which includes the
just-in-time (JIT) compiler, the garbage collector, and the dotnet tool, are not packaged with the
application. The shared installation of these components must be present on the target machine.
When you use containers for deployment, Microsoft provides official images on Docker Hub, which
contain the prerequisites for running a framework-dependent application. These are tagged with various
runtime tags. For example, microsoft/dotnet:2.1-runtime is a container image that contains the .NET
Core shared libraries and the components for running .NET Core 2.1 applications. There are also similar
versions of these images optimized for running ASP.NET Core services.
You can use the following commands to restore NuGet packages, compile your application, publish it as a
framework-dependent package, and then run it from the output directory.
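A sketch of these commands (MyApp is a placeholder project name):
dotnet restore
dotnet build -c Release
dotnet publish -c Release -o out
dotnet ./out/MyApp.dll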
Although you need to install .NET Core libraries and runtime components on the target machine,
framework-dependent publishing has a major advantage. The produced package is completely platform-
independent, and can be run without modification on any platform that supports .NET Core—regardless
of the operating system or processor architecture.
Self-contained publishing
When using self-contained publishing, the final build package of your application contains your
application’s DLLs, third-party dependencies, a complete copy of the .NET Core managed libraries, and
native components, such as the JIT compiler and the garbage collector. Because some of these
components are platform-dependent, when using self-contained publishing, you need to specify the
runtime identifier of a specific platform. The resulting package will run only on a specific operating system
and a processor architecture specified by the runtime identifier.
Some common runtime identifiers include:
• linux-x64. This runtime identifier targets any Linux distribution for x86-64 processors, with the
exception of Alpine Linux. Examples of supported Linux distributions include Debian, Ubuntu Linux,
Red Hat Enterprise Linux, and Fedora.
• alpine.3.6-x64. This runtime identifier targets the Alpine Linux distribution for x86-64 processors.
Alpine Linux is a lightweight Linux distribution, which works well in container environments because
of its small size. For example, a “Hello, World” .NET Core application container image on top of Alpine
Linux can be as small as 54 MB in size.
• win-x64. This runtime identifier targets any version of Windows for x86-64 processors, including
Windows Server 2008 R2, Windows Server 2016, and others.
• win10-arm64. This runtime identifier targets Windows 10 or Windows Server 2016 versions running
on ARM 64-bit processors.
For a full list of runtime identifiers, refer to the .NET Core RID catalog.
https://docs.microsoft.com/en-us/dotnet/core/rid-catalog
You can run the following commands to restore NuGet packages, compile your application, publish it as a
self-contained application for Linux x86-64, and then run it from the output directory.
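A sketch of these commands (MyApp is a placeholder; the runtime identifier selects Linux x86-64):
dotnet restore
dotnet publish -c Release -r linux-x64 -o out
./out/MyApp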
When you use containers for deployment, Microsoft provides official images on Docker Hub, which
contain the basic native dependencies required by a self-contained .NET Core application. These images
are tagged with various runtime-deps tags, that are much smaller in size than the corresponding
runtime tags. For example, microsoft/dotnet:2.1-runtime-deps-alpine is a container image that
contains only the Alpine Linux base image and the native dependencies, such as libzlib and libcurl,
required by .NET Core on Alpine Linux. There are also similar versions of these images optimized for
running ASP.NET Core services as opposed to, for example, .NET Core console applications.
For reference, these are the sample sizes of a “Hello, World” .NET Core 2.1 application packaged into a
container image.
When using self-contained publishing, you should also consider using the intermediate language (IL)
linker NuGet package, which removes modules and types that your application does not require. In some
cases, using the IL linker can reduce package sizes by half or more. IL linker works quite well for many
applications. However, if an application uses reflection extensively, some dependent assemblies might be
removed by the IL linker. You can control this behavior by providing the IL linker with a special
configuration file.
For instructions on using the IL linker, refer to the following link
https://aka.ms/moc-20487D-m6-pg1
Consider the following trade-offs when choosing between the two publishing modes:
• Runtime sharing. If you deploy multiple .NET Core applications to the same machine or run multiple
containers based on the .NET Core container images, then by using framework-dependent publishing,
you can get all the application instances to share the same .NET Core runtime files and assemblies on
the disk. Furthermore, when these runtime files and assemblies are loaded into memory, they are
shared by using the operating system’s library loader, to avoid duplication across processes. On the
other hand, when using self-contained publishing, each application gets its own copy of the .NET
Core runtime files and assemblies on the disk. When you run multiple applications, these files and
assemblies are not shared in memory because they are supported by different files. This produces a
bigger disk and memory footprint.
• Platform flexibility. When using self-contained publishing, you must choose a target platform on
which your application will run. You can’t build the application for Linux operating systems on a 64-
bit Intel processor and then run it on a Windows operating system or an operating system with an
ARM-based processor. On the other hand, when using framework-dependent publishing, the
resulting build can be run by using the dotnet helper executable on any platform where the
appropriate version of .NET Core is installed.
• Minimal dependencies. When using self-contained publishing, you minimize the runtime
dependencies required for hosting your application. In fact, only a handful of native dependencies
need to be installed on the target system, such as libcurl. If your applications run in constrained
environments, or if you distribute your application to be run by others, minimizing dependencies can
be an important advantage.
• Control over the .NET Core version and servicing. When using self-contained publishing, you control
the exact version of .NET Core that will be used to run your application. There’s no risk of servicing
upgrades to the host machine breaking your deployment. On the other hand, you will not benefit
from any security or bug fixes that are deployed to the host machine’s .NET Core installation, which
you would benefit from when using framework-dependent publishing.
A reverse proxy server accepts requests from clients and forwards them to back-end web servers, which
provides several benefits:
• Content caching. A reverse proxy can cache commonly retrieved resources (especially static content)
and return them to clients without making a request to the server.
• Load balancing. A reverse proxy can distribute incoming requests to a pool of several back-end
servers by either using simple load balancing rules, such as round robin, or by inspecting the HTTP
requests, URLs, and headers to determine which server should service the request.
• Web application firewall. A reverse proxy can detect and mitigate common attacks on web
applications.
• SSL termination. A reverse proxy can terminate the HTTPS requests from clients. The computing
resources required for SSL encryption are then offloaded from the web server to the reverse proxy
server.
Microsoft recommends hosting ASP.NET applications and services by using the Kestrel web server behind
a reverse proxy that forwards requests to the Kestrel web server. In addition, you must use a reverse proxy
with the Kestrel web server if you want to run multiple ASP.NET Core applications that share the same IP
and port on a single server. Kestrel doesn’t support sharing the same IP and port across multiple
processes, so without a reverse proxy, clients would have to use a different port for each application. A
reverse proxy can inspect each incoming request and route it to the appropriate Kestrel web server
process, each of which listens on its own unique IP and port combination.
Common reverse proxy software includes Nginx, Apache HTTP Server, Squid, YXORP, and IIS. Most of
these products are open source and available under permissive licenses for use in your own
environment. This lesson uses Nginx, a popular open-source web server that can operate as a reverse
proxy in front of an ASP.NET Core application.
For more considerations related to hosting and deploying ASP.NET Core applications,
including reverse proxies, refer to the following link.
https://aka.ms/moc-20487D-m6-pg2
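You can configure Nginx as a reverse proxy with a server block like the following sketch (the ASP.NET
Core application is assumed to listen on local port 5000):
server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}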
In the preceding example, the listen directive specifies that the Nginx process should accept HTTP
requests over port 80; the proxy_pass directive specifies the address on which the ASP.NET Core
application process is listening; and the proxy_set_header directives include headers that can be used by
the web server; for example to determine the client’s real IP address.
For an example of using two Docker containers to run an ASP.NET Core application and an
Nginx reverse proxy, refer to the following link.
https://aka.ms/moc-20487D-m6-pg3
Note: When you use the ASP.NET Core authentication middleware, you need to use the
UseForwardedHeaders method to forward the X-Forwarded-For and X-Forwarded-Proto
headers. The ForwardedHeaders middleware needs to run before the authentication
middleware. For example, this middleware updates the Request.Scheme property with the value
from the X-Forwarded-Proto header, which might be https://, although the actual request from
the reverse proxy to the web server was performed by using a plain HTTP connection.
For more information about the Forwarded Headers middleware, refer to the following link.
https://aka.ms/moc-20487D-m6-pg4
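A minimal sketch of wiring this middleware in Startup.Configure (assuming the
Microsoft.AspNetCore.HttpOverrides namespace is imported):
// Process X-Forwarded-For and X-Forwarded-Proto before authentication runs
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
app.UseAuthentication();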
Demonstration Steps
You will find the steps in the “Demonstration: Deploying an ASP.NET Core Web Service with Nginx“
section on the following page. https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.
Objectives
After completing this lab, you will be able to:
• Deploy an ASP.NET Core Web API service to a Linux Nginx web server.
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_LAK.md.
Lesson 3
Continuous Delivery with Visual Studio Team Services
In the previous lessons, you learned how to use web deployment techniques to deploy your application
both on-premises and to Azure. However, there are some questions you might want to answer before you
start using the deployment techniques:
• Will you deploy from your source control after each check-in or only on demand?
• Will you deploy only after the code passes unit tests?
• Will you deploy every couple of days, or deploy nightly to have an up-to-date testing environment
the following day?
• Will you manually build, test, and deploy the application every time or use automated, scheduled
tasks?
Continuous delivery is a software development approach that answers some, if not all, of these questions.
If used correctly, it can help you increase the quality of your application.
In this lesson, you will learn the benefits of using continuous delivery and how to use continuous delivery
with Azure and with source control management systems, such as Git and Team Foundation Server (TFS).
Lesson Objectives
After completing this lesson, you will be able to:
• Describe the benefits of continuous delivery.
• Describe how to configure a continuous integration (CI) build with Visual Studio Team Services.
• Explain how to configure a continuous delivery pipeline with Visual Studio Team Services.
• Increase the confidence of your development teams by constantly maintaining a high-quality product.
• Reduce the overall risk of developing a complex software product by using automated tools.
The following screenshot shows the Select a source page on the Visual Studio Team Services project
website.
5. Publish Artifacts. Save the build output in Visual Studio Team Services.
FIGURE 6.11: THE RELEASE TAB IN THE BUILD AND RELEASE HUB.
Select the Azure App Service Deployment template and then click Apply.
Move to the Tasks tab, select your Azure subscription in the Azure subscription drop-down list, and then
select the app service to deploy your application to in the App service name drop-down list.
The following screenshot shows the input parameters on the Tasks tab required to deploy an application.
Demonstration Steps
You will find the steps in the “Demonstration: Continuous Delivery to Websites with Git and Visual Studio Team Services” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.
Lesson 4
Deploying Applications to Staging and Production
Environments
By now, you have learned how to use Web Deploy and continuous delivery to automate the deployment
process of your application, but there is more to deployment than just making sure the target server has
the same version of the new application. For example, when you deploy more than one web application
to a web server, there are steps you can take to improve the way these applications run side-by-side. In
addition, when you deploy a new application to an existing environment, especially to production
environments, you have to consider how the deployment process itself will affect users that are currently
trying to use your application. Will the application still be able to respond to requests while being
updated? Will its throughput be affected when servers are down for deployment?
In this lesson, you will learn about additional tools and techniques that can assist you in deploying
applications to staging and production environments.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain the benefits of deploying your application to the staging environment.
• Explain how to deploy your applications to the staging and production environments in Azure.
• Describe common deployment strategies for applications.
• Describe how to use deployment slots with Azure Web Apps.
• Describe the advantages of configuring your applications in the cloud by using application settings.
In the next topic, you will be introduced to Azure App Service slots, which enable you to create and
delete production-like environments quickly.
• Validate your application in a production-like environment. This is an important step for verifying
that everything is configured correctly and works in the Azure environment.
• Warm up the application before it goes to production. Services usually use caches and database
connections; warming up a service makes it more responsive, so users are not affected by the
new version.
• No request dropping. Requests are initiated and completed in the production environment before
the version upgrade takes place.
• No downtime when switching versions. Swapping the production and staging environments is fast,
so users are not affected by the switch.
• Ability to switch back quickly to the previous working version. Even after checking the application in
testing and staging environments, there can still be production-related problems in scenarios that
can’t be reproduced in those environments. Retaining the previous production version therefore
enables you to switch back to it quickly.
You can also use the staging environment to perform a swap of VIP addresses. With a swap VIP
addresses update, the virtual IP (VIP) and Domain Name System (DNS) addresses of your staging and
production environments are exchanged, so your production environment receives the address of the
staging environment and vice versa.
By creating a staging environment that has the same hardware and software configuration as your
production environment, you can use the swap VIP addresses update to upgrade your production
environment quickly without experiencing the downtime of upgrade domains.
Note: If you have a single instance in your production environment in Azure, performing
an in-place upgrade disables the instance during the upgrade. Using multiple instances, which is
the recommendation for production environments to achieve 99.95% availability, provides the
required availability of your service, but reduces the throughput of the service because of the
downtime of instances in the domain being upgraded.
1. Deploy the upgraded web application to the staging environment. Use the same virtual machine size
and number of instances as you use for your production environment.
2. Verify that your application works correctly in the staging environment. You might have to change
the service URL you are using in the client application to point to the staging environment instead of
the production environment.
Note: A swap VIP addresses update requires having both production and staging
environments deployed. If you only have the staging environment deployed, you will not be able
to use the swap VIP addresses update option.
3. In the Microsoft Azure portal, click App Services, select the service deployment, and then on the
Overview blade, click Swap. In the Swap dialog box, choose the Source and Target slots, and then
click OK.
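You can also perform the swap from a script by using the Azure CLI. A minimal sketch, with hypothetical resource group and app names:

az webapp deployment slot swap --resource-group BlueYonderRG --name blueyonder-app --slot staging --target-slot production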
Note: After you complete the swap VIP addresses update, and no longer require the
staging environment instances, delete the staging deployment to conserve CPU hours.
For more information about staging and production environments, and about deploying to Azure, refer
to the following link.
https://aka.ms/moc-20487D-m6-pg7
Deployment Strategies
Deployment needs to be planned for every version of an application. This topic discusses various
deployment considerations.
Downtime.
When deploying applications to the production environment, zero downtime is crucial for the user
experience. Therefore, you must plan the deployment of any new version that involves database schema
changes, or any other change that can affect the version currently in production. Simulating the
deployment process in a separate environment can help you find issues that weren’t considered earlier.
Multi-phase swap.
Even after simulating the deployment in the testing and staging environments, errors can occur in the
production environment. For this reason, you want to be able to roll back quickly to the last working
version. A multi-phase swap ensures that until you validate the new version, the previous version remains
unchanged and available for rollback. After you approve the new version, the previous version’s
environment is freed up for the next staging deployment.
Auto-swap.
Deploying one large change to an application is a complicated task that is hard to test; deploying many
small changes is much easier to develop and test. To make this process practical, you need a fast,
automatic, smooth, and safe deployment process. Azure App Service offers an auto-swap feature that
automatically swaps the application into the production environment after changes are deployed to the
staging slot.
Demonstration Steps
You will find the steps in the “Demonstration: Using Deployment Slots with Web Apps” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.
For more information about Azure App Services configuration, refer to the following link.
https://aka.ms/moc-20487D-m6-pg8
When swapping between slots in Azure App Service, Azure automatically swaps the settings between
the slots. You can also mark a specific setting as a slot setting so that it sticks with its slot during a swap.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_LAK.md.
Lesson 5
Defining Service Interfaces with API Management
We first discussed the OpenAPI specification in Module 3, “Creating and Consuming ASP.NET Core Web
APIs”, Lesson 5, “Automatically Generating HTTP Requests and Responses”. The idea of defining an
interface, or contract, for your HTTP services in a vendor-agnostic language such as OpenAPI is extremely
valuable for creating interconnected applications, as in a microservices architecture. One of the advantages
of a well-defined API specification is the ability to provide additional layers on top of APIs, such as
documentation endpoints, error handling, security policies, throttling (quotas), and other services.
API Management is a hosted platform that provides numerous services on top of APIs that you host
yourself, or APIs hosted on other Azure services (including Azure App Service). In this lesson, you will learn
how to use API Management and OpenAPI to provide robust, secure, and reliable APIs to your
customers.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe how to configure API policies for throttling, security, and other configurations.
• Limit call rates by using API Management.
• Authentication. API Management can verify API keys used to access your APIs, as well as other
credentials such as client certificates.
• Caching. API Management can cache API responses if so configured (e.g. for rarely-changing GET
requests) and return them to clients without consulting your back-end services.
• Analytics. API Management logs the API calls performed to your service so that you can analyze its
performance and behavior.
• Quotas. API Management can enforce usage quotas and rate limits that you specify, and return the
appropriate errors to clients without putting additional load on your back-end services.
• Transformations. API Management can transform requests and responses on the fly, which is very
useful when multiple versions of your API must continue to be accessible to clients.
Although you can use the API Management portal to create API operations manually, API Management
supports (and uses internally) the OpenAPI specification. As a result, if you already have a well-defined
OpenAPI specification that describes your service (as you should), it will be very easy to get started with
API Management by importing that specification and then defining your API Management configuration
on top of it.
• API Management instance. The API Management instance is a hosted, scalable endpoint that receives
HTTP requests from clients and forwards them as necessary to your API back ends.
• API Management publisher portal. The API Management publisher portal is a hosted web portal that
you use to manage the API Management instance: create new API operations, configure
authentication, throttling, and other features. In recent releases, much of the functionality of the
standalone publisher portal was made available directly in the main Azure portal, under the API
Management service blades.
• API Management developer portal. The API Management developer portal is a hosted web portal
that your clients (developers) use to read the documentation on your API, try it out, and get the API
keys for accessing your API.
Note: The preceding API is not a good example of REST API design best practices. It is
provided only as an illustration that highlights the hierarchy of API Management concepts.
The following screenshot illustrates the first step in creating an API Management instance. You need to
specify the service name, location, organization name, the pricing tier, and other details.
For more information on creating an OpenAPI definition for Azure Function Apps, check the
following link.
https://aka.ms/moc-20487D-m6-pg9
When creating a new API, you specify the back-end service address for that API, and then add at least one
operation. The operation has an HTTP method (e.g. GET, POST), and can have a constant or a variable
(parameterized) URL. For example, the URL /flights/{flight} is a parameterized URL where {flight} will be
replaced with a flight identifier, such as “BY005”.
The following screenshot illustrates the process of testing an API operation by invoking it from the API
Management portal.
FIGURE 6.18: A SCREENSHOT OF THE TEST TAB IN THE API MANAGEMENT PORTAL.
The following screenshot illustrates the HTTP request and response as shown in the API Management
portal.
The following screenshot illustrates the process of creating a new API product.
For more information on creating and importing APIs, see the API Management
documentation:
https://aka.ms/moc-20487D-m6-pg10
After publishing your API, third parties can use the API Management developer portal to browse your
available APIs and API products, test them from the browser, subscribe to get API keys, and interact with
them. There are even automatically-generated code samples in various languages (C#, Java, Python, and
others) for interacting with your API. The API Management developer portal is a standalone web
application, which is hosted by your API Management instance.
Demonstration Steps
You will find the steps in the “Demonstration: Importing and Testing an OpenAPI Specification” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.
• Limit call rate. Restricts API usage to a specific number of calls per interval.
• Cache. Stores a cached response and returns it to the subsequent callers when appropriate.
• Rewrite URL. Converts a URL from its public form to what the backend expects.
• Find and replace string in body. Modifies the request body by performing a string
replace operation.
For a complete list of policies and what they can be used for, refer:
https://aka.ms/moc-20487D-m6-pg11
To specify a policy, you provide the appropriate policy XML definition in the API Management publisher
portal. There is also a simplified form-based editor for common policy definition tasks, such as adding or
removing headers and caching responses.
The following screenshot shows the XML policy editor at the operation scope.
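A sketch of what such a policy definition might look like; the counter key shown here (the caller’s subscription ID) is an assumption for illustration:

<policies>
  <inbound>
    <base />
    <!-- Allow two calls per 60 seconds for each subscription -->
    <rate-limit-by-key calls="2"
                       renewal-period="60"
                       counter-key="@(context.Subscription.Id)" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>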
In the preceding code example, the rate-limit-by-key policy is placed in the inbound policy section,
which means it is evaluated before the back-end service is called. The policy specifies a renewal period of
60 seconds and a limit of two calls per 60 seconds per subscription.
For more information, refer to the following link.
https://aka.ms/moc-20487D-m6-pg12
The following snippet shows the HTTP response returned when the rate limit for a service operation has
been exceeded.
{
"statusCode": 429,
"message": "Rate limit is exceeded. Try again in 51 seconds."
}
Demonstration Steps
You will find the steps in the “Demonstration: Limiting Call Rates Using API Management” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_LAK.md.
Best Practices
• If you are developing a service that is hosted under IIS, incorporate Web Deploy into your
deployment process.
• Use MSDeploy or the Web Deploy PowerShell snap-in when you deploy web applications through
scripts, instead of using tools such as XCOPY.
• Check whether your SCM supports automated builds and use them. If it does not provide automated
builds, evaluate external third-party automated build tools, or consider switching to an SCM system
that does have automated builds.
• Deploy to the staging environment in Azure before you deploy an updated version to your
production environment.
Review Question
Question: What are the tools that use the Web Deployment Framework?
Tools
• Visual Studio 2017
• IIS
• Web Deploy
• Windows PowerShell
• Microsoft Azure
Module 7
Implementing Data Storage in Azure
Contents:
Module Overview 7-1
Module Overview
Storage services are an important concept in cloud computing. Because of the volatile nature of cloud
computing, a single source of truth is needed to maintain the consistency of application data and static
resources. For this reason, most (if not all) cloud platforms provide a storage solution that offers a
persistence store in the cloud.
• Microsoft Azure Files share. This provides a distributed file system that can be accessed via the
Server Message Block protocol from Windows and UNIX-like operating systems.
• Microsoft Azure SQL Database. This provides a fully featured relational store.
• Microsoft Azure Cosmos DB. This provides a fully featured NoSQL solution supporting key-value,
document, columnar, and graph data stores.
• Microsoft Azure Cache for Redis. This provides a key/value store for fast access.
You can access all these storage services through the various client SDKs or directly by using their HTTP-
based APIs. Microsoft Azure Storage provides an out-of-the-box solution for common data storage
challenges such as securing and transferring a large amount of data.
Note: The Microsoft Azure portal UI and Azure dialog boxes in Visual Studio 2017 are
updated frequently when new Azure components and SDKs for .NET are released. Therefore, it is
possible that you will notice some differences between the screenshots and steps shown in this
module and the actual UI you work with in the Azure portal and Visual Studio 2017.
Objectives
After completing this module, you will be able to:
Lesson 1
Choosing a Data Storage Mechanism
Modern applications store and manipulate many types of data from files to data structures. Choosing the
right storage for each data type is a key issue in modern application development. This lesson guides you
on how to choose the right storage for each type of data.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe the difference between SQL Database and Azure Cosmos DB.
• Describe the difference between Azure Cache for Redis and Content Delivery Network.
Comparison of Storage
Storage has different capabilities. The following table illustrates some of its attributes:

Service        Data    Access mechanisms                                 Size limits
Blob storage   Files   HTTP-based APIs with storage client abstraction   4.75 TB per block blob,
                       and Windows file I/O API                          8 TB per page blob
Azure Files    Files   Operating system file I/O APIs or                 1 TB per file,
                       HTTP-based API                                    5 TB per file share
By comparing Blob storage and Azure Files, you can see that Storage offers a great deal of flexibility
with regard to sizes and access mechanisms.
Blob storage and Azure Files have built-in synchronous replication to other machines within the same
Azure datacenter.
Blob storage also offers a geo-replication feature, which copies data to a second datacenter in the same
geographic region (North America, Europe, or Asia). This option is enabled by default and offers better
protection in case an entire datacenter goes offline.
Choosing the right solution depends on the type of application and how the application works with the
data in the cloud. When choosing a solution, you need to take the following into consideration:
• Size of data
• Potential cost
• Key-value
• Document
• Column
• Graph
Key-value
Key-value stores are designed to store simple data in a scalable manner. You can use them to store a large
set of structured entities at a low cost and issue simple queries to retrieve entities when required.
Key-value stores were designed for linear scale and enforce no schema on the stored entities, which
means you can store different types of entities in the same table.
Key-value stores do not provide any way to represent relationships between entities and thus do not
support join operations.
Document
Document stores are designed to store semi-structured data called documents, typically JSON, XML, or
YAML files. Like key-value stores, documents are identified by a key, the stores are designed for linear
scale, and there is no strict schema for a document. The difference is that a document can be retrieved
by its content and not only by its key.
Column
Columnar stores are similar to relational databases in that they store data in columns and rows and
require the columns to be defined first. However, columnar stores organize the data differently: the
primary ordering is by column rather than by row. This ordering makes aggregating data by column
very fast.
Graph
Graph stores are designed to store data as a graph of nodes and edges, and queries traverse the graph.
This kind of database is best suited for finding patterns in data or for querying data with many
connections. Although a graph database is a NoSQL database, scaling a graph store is hard; however,
some databases offer a degree of scaling.
• Data model. Azure SQL Database uses the traditional relational model and is best for data with a
strict schema. If the data for your application needs more flexibility, Azure Cosmos DB offers four
kinds of NoSQL data models. Choosing the right model for each task can speed up development.
• Availability. The availability, configuration, and maintenance of both require minimal configuration
and administration.
• Scalability. Both can be scaled, but the main difference is that Azure SQL Database scales well for read
operations, whereas Azure Cosmos DB scales well for both read and write operations.
Data distribution and caching with Azure Cache for Redis and Content
Delivery Network
Applications manipulate a lot of data, and storing that data close to its point of use is important for
reducing data transfer time over the network. Some data needs to be available to the application and
some needs to be available to the users; for this reason, Azure offers two services.
Lesson 2
Accessing Data in Azure Storage
Azure Storage provides Blob storage for storing files in a scalable and durable manner.
In this lesson, you will explore the Blob storage features and learn how to use them.
Lesson Objectives
After completing this lesson, you will be able to:
• Storing data for background analysis, either by Azure-hosted services or by on-premises
applications.
• Replacing existing applications’ use of file systems.
This is not an exhaustive list, and there are many more scenarios that can benefit from the use of blobs.
However, having so many objects requires some form of organization.
Blob storage is also used extensively throughout Azure. For example, the Azure deployment
mechanism saves deployment packages to Blob storage, and these packages are also used by the
autoscaling mechanism. Diagnostics logs are saved to Blob storage, and Azure virtual machine disks
are persisted to Blob storage as well.
• Storage account. Storage accounts are the root entities of the Blob storage. Every access to Storage
must be done through a Storage account.
• Container. Containers are the sub-entities of the Storage accounts. Each container can contain blobs.
An account can contain an unlimited number of containers. A container can store an unlimited
number of blobs.
• Blob. Blobs are the leaves of the hierarchy, and each represents a file of any type. There are two types
of blobs: block blobs and page blobs. The differences between block blobs and page blobs are covered
later in this lesson.
Note: The Azure SDK contains a class called CloudBlobDirectory; however, directories are
not part of the hierarchy and simply represent substrings of the blob’s name separated by /.
Each blob can be addressed, by using the schema at the following URL:
http://<storage account>.blob.core.windows.net/<container>/<blob>.
There are two types of blobs targeted for different workloads: block blobs and page blobs.
Block blobs
Block blobs are designed for streaming workloads where the entire blob is uploaded or downloaded as a
stream of blocks. The maximum size for a block blob is 4.75 TB, and it can include up to 50,000 blocks.
Splitting the blob into a collection of blocks allows you to upload a large blob efficiently by using a
number of threads that execute the upload tasks in parallel. Each block is identified by a BlockID and can
vary in size up to a maximum of 100 MB. To upload a block blob, you must first upload a collection of
blocks and then commit them by their BlockID.
Block blobs simplify large file upload over the network by introducing the following features:
• Parallel upload of multiple blocks to reduce communication time
The following code shows how to split a file into blocks and upload them to a block blob.
// Requires System.Text, System.Collections.Generic, and Microsoft.WindowsAzure.Storage.Blob;
// assumes 'container' is an existing CloudBlobContainer reference
CloudBlockBlob blob = container.GetBlockBlobReference("MyFile.txt");
var blockList = new List<string>();
int id = 0;
using (var fs = System.IO.File.OpenRead("MyFile.txt"))
{
    byte[] data = new byte[100];
    int bytesRead;
    while ((bytesRead = fs.Read(data, 0, 100)) != 0)
    {
        using (var stream = new System.IO.MemoryStream(data, 0, bytesRead))
        {
            // Block IDs must be Base64-encoded and of equal length
            string blockID =
                Convert.ToBase64String(Encoding.UTF8.GetBytes((id++).ToString("d6")));
            // Upload a single block
            await blob.PutBlockAsync(blockID, stream, null);
            blockList.Add(blockID);
        }
    }
}
// Commit the uploaded blocks to form the blob
await blob.PutBlockListAsync(blockList);
The following code shows how to upload a large file to a block blob by using multiple threads.
// Upload blocks with up to 10 parallel operations; blobs larger than the
// threshold (which must be between 1 MB and 64 MB) are split into blocks
storageClient.DefaultRequestOptions.ParallelOperationThreadCount = 10;
storageClient.DefaultRequestOptions.SingleBlobUploadThresholdInBytes = 1024 * 1024;
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
await blob.UploadFromFileAsync(Path.Combine(path, fileName));
Page blobs
Page blobs are designed for random-access workloads in which clients execute random read and write
operations in different parts of the blob. Page blobs can be treated much like an array of bytes structured
as a collection of 512-byte pages. Handling a page blob is similar to handling a byte array:
• Read and write operations are executed by specifying an offset and a range (that align to 512-byte
page boundaries)
Unlike block blobs, page blobs do not introduce a separate commit phase, so writes to page blobs
happen in-place and are immediately committed to the blob.
Reading data from page blobs can be done by using the OpenReadAsync method that lets you stream
the full blob or a range of pages from any offset in the blob, or by using the GetPageRanges method for
getting an enumeration over PageRange objects.
The following code shows how to read from a page blob by using OpenReadAsync.
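A minimal sketch, assuming pageBlob is an existing CloudPageBlob reference:

// Read one 512-byte page from a given offset
using (var stream = await pageBlob.OpenReadAsync())
{
    byte[] buffer = new byte[512];
    stream.Seek(1024, SeekOrigin.Begin); // offsets must align to 512-byte pages
    await stream.ReadAsync(buffer, 0, buffer.Length);
}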
Unlike block blobs, page blobs are not contiguous, so when reading pages that have no data stored in
them, the Blob service returns zeros for those pages. You can use the GetPageRangesAsync method
to get a list of the ranges in the blob that contain valid data. You can then enumerate the list and
download the data from each page range.
The following code shows how to read from a page blob by using GetPageRangesAsync.
Using GetPageRangesAsync
// Connect by using the storage account's blob endpoint (placeholder shown)
CloudBlobClient blobClient = new CloudBlobClient(new Uri("https://<storage account>.blob.core.windows.net/"));
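A minimal sketch of the rest of the flow, assuming a hypothetical container named data and a page blob named log.dat:

CloudBlobContainer container = blobClient.GetContainerReference("data");
CloudPageBlob pageBlob = container.GetPageBlobReference("log.dat");
// Enumerate only the ranges that actually contain data
foreach (PageRange range in await pageBlob.GetPageRangesAsync())
{
    int length = (int)(range.EndOffset - range.StartOffset + 1);
    byte[] buffer = new byte[length];
    // Download just this range of valid pages
    await pageBlob.DownloadRangeToByteArrayAsync(buffer, 0, range.StartOffset, length);
}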
There are three retry policies built into the Storage client library:
• RetryPolicies.NoRetry. No retry is executed.
• RetryPolicies.LinearRetry. Retries N times with the same back-off interval between each attempt.
• RetryPolicies.ExponentialRetry. Retries N times with an exponentially growing back-off interval
between attempts.
Not all exceptions cause the Storage client to initiate a retry. Exceptions are classified as retryable or
non-retryable. For example, all HTTP status codes greater than or equal to 400 and less than 500 are
non-retryable, because they indicate that the service could not process the client’s request due to a
problem with the request itself. All other exceptions are retryable. For example, if a client-side timeout
was triggered, it makes sense to initiate a retry.
After retryable exceptions are caught, the Storage client library evaluates RetryPolicy and decides
whether to initiate a retry. The exception will be presented to the client only if RetryPolicy determines
that there is no need to retry the operation. For example, if RetryPolicy was configured to execute three
retry attempts, the exception is rethrown to the client only when the third attempt fails.
It is possible to construct custom retry policies and customize the retry algorithm to fit your specific
scenario. For example, you can set a retry algorithm per exception type. To implement a custom retry
policy, implement the IExtendedRetryPolicy interface, which determines whether to retry a specific
operation and the interval until the next retry.
The following code shows how to create and use a custom retry policy.
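A minimal sketch of such a policy, assuming a hypothetical CustomRetryPolicy class that retries server errors (HTTP 5xx) up to three times with an exponential back-off:

public class CustomRetryPolicy : IExtendedRetryPolicy
{
    private const int MaxAttempts = 3;

    public RetryInfo Evaluate(RetryContext retryContext, OperationContext operationContext)
    {
        // Retry only server-side errors (HTTP 5xx), up to MaxAttempts times
        if (retryContext.CurrentRetryCount < MaxAttempts &&
            retryContext.LastRequestResult.HttpStatusCode >= 500)
        {
            return new RetryInfo(retryContext)
            {
                RetryInterval = TimeSpan.FromSeconds(Math.Pow(2, retryContext.CurrentRetryCount))
            };
        }
        return null; // null means: do not retry
    }

    public IRetryPolicy CreateInstance() => new CustomRetryPolicy();

    public bool ShouldRetry(int currentRetryCount, int statusCode, Exception lastException,
        out TimeSpan retryInterval, OperationContext operationContext)
    {
        retryInterval = TimeSpan.FromSeconds(Math.Pow(2, currentRetryCount));
        return currentRetryCount < MaxAttempts && statusCode >= 500;
    }
}

// Apply the custom policy to a single request
var options = new BlobRequestOptions
{
    RetryPolicy = new CustomRetryPolicy()
};
await container.CreateAsync(options, null);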
• Full public read access. Container and blob data can be accessed for reads via anonymous requests
but enumeration of containers in the storage account is blocked. Enumeration of blobs inside a
container, however, is permitted.
• Public read access for blobs only. Blob data can be accessed for read via anonymous request but
enumeration of blobs in a container is blocked.
To set a blob container policy, create a BlobContainerPermissions object and set its
PublicAccess property to one of the BlobContainerPublicAccessType values. Finally, call the
SetPermissionsAsync method on the CloudBlobContainer object and pass in the permissions object.
The following code shows how to set a blob container’s access policy to public read access for blobs only.
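A minimal sketch, assuming container is an existing CloudBlobContainer reference:

var permissions = new BlobContainerPermissions
{
    // Blob-level access: anonymous clients can read blobs but cannot list them
    PublicAccess = BlobContainerPublicAccessType.Blob
};
await container.SetPermissionsAsync(permissions);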
Demonstration Steps
You will find the steps in the “Accessing Microsoft Azure Blob Storage from a Microsoft ASP.NET Core Application” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.
All the information about the granted access levels, the specific resource, and the allotted time frame is
incorporated within the Shared Access Signature URL as query parameters. In addition, the Shared Access
Signature URL contains a signature that the storage services use to validate the request.
It is possible to specify all access control information in the URL or to embed a reference to an access
policy. With access policies, you can modify or revoke access to the resource if necessary.
For more information about the structure of the Shared Access Signature URL, consult MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298849&clcid=0x409
To create a shared access signature for a file, call the GetPermissionsAsync method of a CloudFileShare
object, add a policy to the SharedAccessPolicies property of the returned permissions object, and then
save the permissions by calling SetPermissionsAsync.
The following code shows how to create a shared access signature for a file.
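A minimal sketch of the complete flow, assuming share is an existing CloudFileShare and a hypothetical policy name of read-only-policy:

// Define a stored access policy that grants read access for one hour
var sharedPolicy = new SharedAccessFilePolicy
{
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1),
    Permissions = SharedAccessFilePermissions.Read
};
FileSharePermissions permissions = await share.GetPermissionsAsync();
permissions.SharedAccessPolicies.Add("read-only-policy", sharedPolicy);
await share.SetPermissionsAsync(permissions);
// Generate a SAS token for a specific file based on the stored policy
CloudFile file = share.GetRootDirectoryReference().GetFileReference("data.txt");
string sasToken = file.GetSharedAccessSignature(null, "read-only-policy");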
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 60 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_LAK.md.
Lesson 3
Working with Structured Data in Azure
In the previous lesson, you learned how to store unstructured data in Azure Storage blobs and files.
Unstructured data storage is simple and inexpensive, but it does not lend itself to efficient querying and
updating. The Azure cloud platform provides numerous services for storing, querying, and updating
structured data, including SQL Database and Azure Cosmos DB. Both services are globally available and
can scale with your application’s demands, but have slightly different constraints, use cases, and APIs.
In this lesson, we will explore Azure SQL Database, Microsoft’s cloud-optimized, scalable, globally-
distributed version of the popular SQL Server database product. Then, we will discuss Azure Cosmos DB, a
novel database service that supports multiple types of API flavors in a single distributed platform with a
choice of data consistency strategies to fit your application’s architecture and business needs.
Lesson Objectives
After completing this lesson, students will be able to:
• Create an SQL Database and access it from a web application.
SQL Database provides three deployment flavors that you can use. Choosing the right flavor depends on
your business needs, cost and performance requirements, and more. The flavors are:
Single database. You create a new database and assign it performance resources (database transaction
units (DTUs) or vCores, discussed below). The platform guarantees that your database will receive the
necessary hardware resources to support the required load.
Elastic database pool. You create a database pool and assign it performance resources (DTUs). Then, you
assign one or more databases to the pool. The databases in the pool share the pool’s resources, so if one
database maxes out the pool’s resources, other databases will be throttled temporarily. Despite this risk,
the elastic approach is useful when you have varying degrees of loads across numerous databases, and
assigning each database a high number of DTUs would be unreasonably expensive.
SQL Database Managed Instance. You create a managed instance, which is essentially a standalone
managed database server. When using SQL Database Managed Instance, you have 100% compatibility
with the on-premises version of SQL Server, but you do not have to worry about manual database
backups, upgrades, security patching, and other concerns.
Note: In addition to using SQL Database, which is a PaaS offering, you can also deploy the
on-premises version of SQL Server to an Azure Virtual Machine. By doing so, you take on the
responsibility of managing the virtual machine instance, including operating system updates,
security patches, and database upgrades. This is still a reasonable choice in some scenarios, where
you need to lift-and-shift an existing deployment into Azure and perform more fine-grained
migration steps later.
For more information about the differences between SQL Database and the on-premises
version of SQL Server (which is also available in SQL Database Managed Instances), refer:
https://aka.ms/moc-20487D-m7-pg5
You assign performance resources to databases using one of two methods: database transaction units
(DTUs) or virtual cores (vCores). The Azure platform guarantees that your database will receive the
hardware resources required to support the desired load; if you exceed your resource allotment, you
might experience query degradation and throttling (although in many cases, the database will still operate
normally). Workloads from other databases, even your own, will not affect your database, unless you’re
using an elastic pool.
DTUs: A DTU is a combination of compute, storage, and I/O resources required to service database
operations. To determine the resources required to support a single DTU, Microsoft uses the Azure SQL
Database Benchmark (ASDB), which runs a mix of basic operations for online transaction processing
(OLTP) workloads. There are various pricing tiers with different numbers of DTUs. For example, in the P15
tier, the maximum database size is 4 TB and the maximum number of concurrent requests is 6,400. In the
S0 tier, the maximum database size is 250 GB and the maximum number of concurrent requests is 60.
vCores: Under the vCore purchasing model, you pay for compute resources (virtual cores), data storage,
and the number of I/O operations. You can independently scale the compute and storage resources,
which may provide greater flexibility than the DTU-based pricing.
For more information about the SQL Database Benchmark used to determine DTU
performance equivalents, and how it might relate to your actual workload, refer:
https://aka.ms/moc-20487D-m7-pg7
To evaluate the DTU requirements of your on-premises database workloads, you can use the
SQL Database DTU Calculator, which collects performance counter data from your
on-premises machine and analyzes it to produce an estimate:
https://aka.ms/moc-20487D-m7-pg8
The following screenshot shows the Azure portal dialog for creating a new SQL Database:
The following screenshot shows the Overview blade in the Azure portal, with the new database’s name
and performance metrics:
The following screenshot shows the firewall configuration dialog in the Azure portal, which you can use to
allow your client IP address access to the database (you will still need the database username and
password to authenticate):
The following screenshot shows the Visual Studio SQL Server Object Explorer attached to the blueyonder
database:
The following screenshot shows the SQL query editor in the Azure portal, where you can run basic queries
and explore your database without leaving the browser:
Demonstration Steps
You will find the steps in the “Uploading an Azure SQL Database to Azure and Accessing it Locally” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.
• Mongo DB. Manipulate data using the Mongo DB APIs, compatible with existing Mongo DB client
libraries. The data is stored in JSON documents.
• Cassandra. Use the Cassandra query language and protocol to manipulate data organized in a
tabular format. This is commonly referred to as a wide column store.
• Table. Store data in a simple key-value format, mimicking Azure Storage tables.
• Graph (Gremlin). Store graph nodes and edges and use the Gremlin query language with Open
Graph APIs.
For more information about Azure Cosmos DB supported API types, refer:
https://aka.ms/moc-20487D-m7-pg10
The following screenshot shows the Azure portal dialog for creating a new Azure Cosmos DB account, and
selecting the API you would like to use:
The following screenshot shows the Azure portal dialog for selecting global regions for your Azure
Cosmos DB account, and configuring their read/write/read-write status:
For more information on distributing data globally to multiple regions with Azure Cosmos
DB, refer:
https://aka.ms/moc-20487D-m7-pg11
You can scale Azure Cosmos DB accounts by using one of two modes: fixed or unlimited. In fixed mode,
your account is limited to 10 GB of storage capacity, and you configure the account with a throughput
limit of 400 to 10,000 Request Units (RU) per second. A Request Unit corresponds to a read operation
on a single 1-KB document. In unlimited mode, you can scale to an unlimited storage capacity and a
throughput limit of 10,000 to 100,000 RU/s.
For a more detailed explanation of how Request Units correlate to create, read, update, and
delete operations on documents, refer:
https://aka.ms/moc-20487D-m7-pg12
You can use the Azure Cosmos DB capacity planner to estimate the RU and data storage
requirements of your account:
https://aka.ms/moc-20487D-m7-pg13
The following screenshot shows the Data Explorer pane in the Azure portal, which you can use to
manipulate data stored in an Azure Cosmos DB account through the Mongo DB API:
The following code accesses the Azure Cosmos DB account by using the .NET MongoDB driver library
and retrieves all the flights from Paris:
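This listing is a minimal sketch; the blueyonder database, the flights collection, and the Flight class are assumptions for illustration.

// Requires the MongoDB.Driver NuGet package
public class Flight
{
    public string Id { get; set; }
    public string Source { get; set; }
    public string Destination { get; set; }
}

var client = new MongoClient("<connection string from the Azure portal>");
IMongoDatabase database = client.GetDatabase("blueyonder");
IMongoCollection<Flight> flights = database.GetCollection<Flight>("flights");
// Retrieve all flights whose source is Paris
List<Flight> fromParis = await flights.Find(f => f.Source == "Paris").ToListAsync();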
Graph databases store nodes and edges, each of which can have additional properties attached. Special
query languages, such as Cypher and Gremlin, are used to query the graph by exploring its relationships,
for example, retrieving all nodes connected by a specific relationship.
The connected nature of a graph makes it easier to express certain types of queries in a graph database
compared to a relational or document database. For example, if your graph consists of Person nodes
representing social media users, there is a Friend edge between each two friends, and there is a Follow
edge between a user and another user or page that they follow, then you can easily perform queries such
as finding mutual friends between two users, or finding the friends of a friend who are interested in a
certain page, and so on.
The following screenshot shows the Azure Cosmos DB Data Explorer for accessing data in an Azure
Cosmos DB account configured with the Graph (Gremlin) API:
The following screenshot shows the Gremlin console connected to an Azure Cosmos DB account and
performing queries:
In this demonstration, you will create a new Azure Cosmos DB instance with the MongoDB API in the
Azure portal, use a script to create collections with some objects, and then run some queries on the
objects.
Demonstration Steps
You will find the steps in the “Using Microsoft Azure Cosmos DB with the MongoDB API” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.
Demonstration Steps
You will find the steps in the “Using Cosmos DB with a Graph Database API” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.
• Partition tolerance. The system continues to function when facing network partitions, i.e. when
arbitrary messages are dropped or delayed by the network.
Because no modern distributed network is completely safe from network partitions, it follows from the
theorem that you must trade consistency for availability. In the face of a network partition, your system
will either stop responding to requests (losing availability to preserve consistency), or it might diverge
into separate views of the most recent data (losing consistency to preserve availability).
One of the key effects of bringing the CAP theorem into the minds of software architects and engineers is
that distributed storage systems are now being designed with interesting consistency models and
distributed applications need to take advantage of these consistency models. Beyond just trading
availability for consistency, there are varying degrees of consistency that can have wildly different
performance characteristics. For example, strong consistency (where every read must receive the most
recent write) is quite expensive to achieve in a distributed system, and some applications might be able to
relax the consistency requirements to obtain better performance or lower costs.
Many distributed databases offer poorly defined consistency guarantees, or only a choice between strong
and eventual (weak) consistency. Azure Cosmos DB provides five consistency models (strategies) that
you can choose from. Furthermore, you can choose a strong consistency strategy for your database and
then relax it and use a weaker strategy for specific operations where it would be beneficial for
performance, as shown in the sketch after the following list. The consistency strategies supported by
Azure Cosmos DB are:
• Strong. Reads are guaranteed to return the most recent version of an item. A write only becomes
visible after it is committed by a majority of replicas, and performing a read requires
acknowledgement from a majority of replicas as well. When using strong consistency, you can’t
associate your Azure Cosmos DB account with more than one region (because it would be
prohibitively expensive to consult a majority of replicas in real-time).
• Bounded Staleness. Reads lag behind writes by at most k versions of an item, or at most t seconds.
With bounded staleness, you can use more than one region. The read cost in terms of Request Units
is the same as with strong consistency.
• Session. Consistency is scoped to a single client session. It is guaranteed that the client can read its
own writes and that reads and writes are monotonic (for example, if one read in a given session
returned version 7 of an item, the next read of the same item cannot return a version earlier than 7).
The read cost in terms of Request Units is lower than with bounded staleness or strong consistency.
• Consistent Prefix. Eventual convergence of all the replicas is guaranteed if at some point writes are
stopped. Reads don’t see out-of-order writes. For example, if the write order was A, B, C, then client
reads might see A, B or A, B, C, but a client will not see B, A, C. The read cost in terms of Request Units
is the same as with session consistency.
• Eventual. Eventual convergence of all the replicas is guaranteed if at some point writes are stopped. A
client might read values older than ones it had seen before. For example, read version 7 of an item
and then read version 5 of the same item. This mode has the lowest read cost in terms of Request
Units compared to all the other options.
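As a sketch of relaxing consistency per operation with the .NET SDK (the endpoint, key, and document names are hypothetical):

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Account-level default: Session consistency
var client = new DocumentClient(new Uri("https://<account>.documents.azure.com/"),
    "<account-key>", desiredConsistencyLevel: ConsistencyLevel.Session);
// Relax consistency to Eventual for this specific read to lower its RU cost
var response = await client.ReadDocumentAsync(
    UriFactory.CreateDocumentUri("blueyonder", "flights", "BY005"),
    new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });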
Although it might sound as though strong consistency is the only option for building a reliable, correct
distributed system, there are often reasons why the consistency model can be relaxed. For example, if it is
known that only a single client is updating a certain item (a specific player’s high score in a game that is
only installed on a single device), then using strong consistency is not required and session consistency
can be used instead.
For more information on the Azure Cosmos DB consistency levels, and to understand how to
choose the appropriate consistency model for your needs, refer:
https://aka.ms/moc-20487D-m7-pg19
For a more thorough explanation of various consistency guarantees in distributed data stores
through examples, refer:
https://aka.ms/moc-20487D-m7-pg20
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_LAK.md.
Lesson 4
Geographically Distributing Data with Content Delivery
Network
Scaling services so that they are operating at their optimal level for users in different countries or
continents can be a challenge. Cloud platforms such as Azure provide several features to simplify the
process of scaling.
In this lesson, you will learn about the issues that apply to services that need to scale on a global scale and
how Azure can help.
Lesson Objectives
After completing this lesson, students will be able to:
• For users. Static content is delivered quickly and user experience is enhanced. Long round-trip times
are only required for accessing the actual dynamic portions of the application.
• For developers. Traffic to the application’s servers is reduced to only such requests that require
dynamic content. Scalability is enhanced and costs are lowered. It is the Content Delivery Network
that bears most of the traffic for the application.
Azure Content Delivery Network Premium from Verizon. The point-of-presence (POP) locations for these
Content Delivery Network offerings include dozens of cities on every continent (except Antarctica).
For more information on the individual features supported by each Content Delivery Network offering,
refer:
When creating a new Content Delivery Network endpoint, you specify the origin type for the endpoint, as
well as the resources under that origin that you would like to cache. The available origin types include:
The first time a specific object is requested from the Content Delivery Network, it is retrieved from its
origin and cached at the Content Delivery Network endpoint; subsequent requests are served directly
from the Content Delivery Network. Note that differences in URL query string parameters are ignored by
default (the URLs are treated as the same resource), but you can configure this behavior.
It is also possible to provide the caching setting in the query string of the resource URI.
• Override. Ignore the query string caching settings and cache for the duration provided by the rule.
• Set if missing. If no query string caching setting is provided, use the duration provided by the rule.
File compression
Content Delivery Network enables you to compress files before they are sent to users. This gives users a
more responsive experience and reduces network traffic, which saves costs.
For more information about Content Delivery Network file compression, refer:
https://aka.ms/moc-20487D-m7-pg26
Geo-filtering
Content Delivery Network enables you to restrict access to some resources from specific countries by
creating a rule.
For more information about geo-filtering with Content Delivery Network, refer:
https://aka.ms/moc-20487D-m7-pg27
FIGURE 7.16: THE CREATE CDN PROFILE DIALOG BOX IN AZURE PORTAL.
After the Content Delivery Network profile has been created, navigate to the Content Delivery Network
blade and add a new endpoint. For the origin type, choose the Web App option, and in Origin hostname,
select your website’s URL.
The following image is a screenshot of the Add an endpoint dialog box for the created Content Delivery
Network.
The following image is a screenshot of the menu options for the newly created endpoint.
For more information about using the Content Delivery Network dynamic site acceleration,
refer:
https://aka.ms/moc-20487D-m7-pg22
Demonstration Steps
You will find the steps in the “Configuring a CDN Endpoint for a Static Website” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.
Lesson 5
Scaling with Out-of-Process Cache
A distributed cache is a basic component for implementing highly scalable distributed applications.
Application servers can store a large set of information in a collection of servers that form a cache cluster.
The information is stored in-memory across the cluster to provide low latency and high throughput.
This lesson describes Azure Cache for Redis and the API for executing data access operations.
Lesson Objectives
After completing this lesson, students will be able to:
In high-scale scenarios, you have to store data in an independent data store that is accessible to all
computers that may request it. One option is to store the data in a database, but then each data access
suffers from long delays. Another solution is to create a dedicated server that stores data in-memory for
all other execution machines. However, a single server is limited in its memory capacity and is unreliable
by design. Highly scalable applications often store more data in memory than a single machine can
handle and cannot afford a single point of failure.
The solution is a distributed cache that spans multiple servers. Data is stored in memory on multiple
machines so the cache can grow in size and in transactional capacity. However, clients work against a
single logical cache without knowing where the data is actually stored.
Caches are useful for storing temporary data. All data items in the cache are automatically removed
according to expiry periods and the cleanup policy, which frees the developer from handling garbage
collection of unnecessary data stored in the cache. Applications can store intermediate data in the cache,
use it in their calculations, and then forget about it; it will be cleaned up automatically.
Distributed caches simplify the execution of parallel tasks across servers in high-performance computing
or map-reduce applications. A complex job can be divided into simpler tasks, distributed across servers
and executed in parallel. Intermediate results produced by such tasks can be stored in the cache before
being used by other tasks in the execution flow.
If data reliability is required, you can use replication and store the same data on multiple cache servers. If
one server fails, the data will still be available.
With distributed cache, you can improve the performance of high-scale applications that span multiple
servers. Distributed cache is as simple to use as traditional in-memory cache but can grow in size
according to demand and can serve multiple applications simultaneously.
Applications such as ASP.NET websites deployed on a web farm with multiple servers can store their
session state in a distributed cache and gain fast data access across the web farm as well as automatic
cleanup.
If you are using the Premium pricing tier, you can create clusters that are larger than the 53 GB limit of an
individual cache, and shard data across multiple Redis nodes. You can also configure persistence to
persist your cache to a Storage account, achieving resiliency and faster startup times because the cache is
immediately repopulated after a restart.
To create a new Azure Cache for Redis, you can use the Azure portal, Azure Resource Manager templates,
Azure PowerShell, or the Azure CLI. You can subsequently configure the cache by using all of these
methods as well. The following screenshot demonstrates the new cache configuration dialog box in the
Azure portal, which you can find under + Create a resource > Databases > Redis Cache. To submit the
cache creation request, click Create; the cache is provisioned within a few minutes.
The following image shows the New Redis Cache blade for creating a new Azure Cache for Redis.
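For example, you can create a cache by using the Azure CLI. The following is a minimal sketch; the cache
name, resource group, location, and size are illustrative:

az redis create --name blueyonder-cache --resource-group BlueYonderRG --location westeurope --sku Basic --vm-size c0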
To access Azure Cache for Redis, use the same Azure Cache for Redis client API as for an on-premises
cache. In the next topic, “Using Azure Redis Cache from Code,” we will use the StackExchange.Redis
NuGet package to access the Azure-hosted Cache for Redis.
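For example, the following sketch connects to an Azure Cache for Redis instance and performs simple
read and write operations by using StackExchange.Redis; the cache host name, access key, and key
names are placeholders:

using StackExchange.Redis;

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
    "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
IDatabase cache = connection.GetDatabase();
cache.StringSet("flight:BY005:status", "OnTime");        // write a value to the cache
string status = cache.StringGet("flight:BY005:status");  // read the value back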
https://aka.ms/moc-20487D-m7-pg29
Demonstration Steps
You will find the steps in the “Using Microsoft Azure Redis Cache for Caching Data“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_DEMO.md.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_LAK.md.
Review Question
Question: You have been approached by an online educational organization and asked to
design an application for tracking student activity. How would you use Storage for this task?
Module 8
Monitoring and Diagnostics
Contents:
Module Overview
Module Overview
In the real world, most application failures often occur only in production environments and not on the
developer’s machine. Understanding why applications fail and obtaining as much information as possible
from the runtime environment is of paramount importance to operations engineers and developers
looking to resolve bugs or understand application performance. Additionally, security concerns frequently
require collecting audit information from production machines for accountability and analysis purposes.
This module discusses tracing, with a focus on web service tracing and on auditing technologies provided
by Microsoft Azure. The module begins with tracing in the Microsoft .NET Framework by using
System.Diagnostics, and then describes tracing in web service infrastructures such as Windows
Communication Foundation (WCF) and Microsoft ASP.NET Web Application Programming Interface (API).
Finally, it explains the information you can get from the host with Microsoft Internet Information Services
(IIS), as well as Azure monitoring and diagnostics.
Note: The portal UI and Azure dialog boxes in Microsoft Visual Studio 2017 are updated
frequently when new Azure components and SDKs for .NET are released. Therefore, it is possible
that some differences will exist between screenshots and steps shown in this module, and the
actual UI you encounter in the portal and Visual Studio 2017.
Objectives
After completing this module, you will be able to:
• Perform tracing in the .NET Framework with the System.Diagnostics namespace.
Lesson 1
Logging in ASP.NET Core
The most common type of diagnostic data you can expect from a production system is logs. There are
numerous ways to emit log messages (or traces) and many ways to format, store, and analyze them; many
of you know the feeling of chasing a bug through thousands of lines of logs. Later in this module, we will
discuss some alternative approaches to monitoring and diagnostics, which do not require parsing
extensive log messages from your application. Nonetheless, you can find and fix some problems only by
carefully checking logs and traces and correlating them to issues in the application code and
configuration.
In this lesson, we will explore the ASP.NET Core logging framework, which is easy to use, extensible, and
ships with a large number of built-in logging providers. We will emit logs to various providers and learn
how to stream diagnostic logs from ASP.NET Core services that run in the Azure App Service.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain how to emit log messages from various ASP.NET Core application components.
• Explain how to configure logging levels, categories, scopes, and structured logs.
• Write messages to various logging providers, including Event Tracing for Windows (ETW).
• Use third-party logging providers with the ASP.NET Core logging API.
• Describe streaming diagnostic logs from an application that runs in Azure App Service.
• IsEnabled. Returns whether a specific log level is enabled, so that you can avoid generating expensive
log data if the log message will not actually be written anywhere.
• BeginScope. Begins a logical operation scope, which can group associated logs and make them
easier to understand later. (Scopes are also discussed in Topic 2, “Advanced Logging Configuration.”)
To obtain an ILogger object, you can use ASP.NET Core dependency injection. This is the easiest way,
and it works very well for controller methods that need a logger. A more advanced approach is to obtain
an ILoggerFactory object, which you can use to configure where your logs are written, and then create
an ILogger object from the logger factory. You can use ASP.NET Core dependency injection to obtain
the logger factory, or create a new LoggerFactory object that implements the ILoggerFactory interface.
The following code shows an ASP.NET Core controller that uses dependency injection to obtain an
ILogger object:
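A minimal sketch of such a controller follows; the flight lookup and the route are illustrative:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class FlightsController : Controller
{
    private readonly ILogger _logger;

    public FlightsController(ILogger<FlightsController> logger)
    {
        _logger = logger;
    }

    [HttpGet("{id}")]
    public IActionResult GetFlight(int id)
    {
        _logger.LogInformation("Getting flight {id}", id);
        var flight = _flights.FindById(id);  // _flights is an illustrative repository
        if (flight == null)
        {
            _logger.LogWarning("Flight {id} was not found", id);
            return NotFound();
        }
        return Ok(flight);
    }
}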
In the preceding example, an ILogger object is injected into the FlightsController constructor by the
ASP.NET Core dependency injection infrastructure. Then, the LogInformation and LogWarning methods
are used to write log messages. Note that the format string passed to these methods is not a standard
String.Format format string (for example, "id = {0}"), and it’s not a C# interpolated string either (for
example, $"id = {id}"). It is a custom format used by the ASP.NET Core logging API.
The generic type parameter of the ILogger object injected to the FlightsController constructor specifies
the logger’s category, which you can use to easily parse all the log messages from a specific area in the
application’s code. You could also inject the non-generic ILogger interface, which would be associated
with a default category. Categories are discussed in Topic 2, “Advanced Logging Configuration.” Note that
you can assign ILogger<T> to ILogger, and use the ILogger object (as in the above example). The
generic type parameter is only used to determine the logger’s category.
Note: ASP.NET Core will write its internal logs to the logger factory that it creates internally,
even if you create additional logger factories. If you want to create your own logger factory and
have ASP.NET Core write its logs to it, you’ll need to call ApplicationLogging.ConfigureLogger
with your logger factory object, and then set the ApplicationLogging.LoggerFactory property
to the same object.
The following code shows an ASP.NET Core controller that creates its own ILogger object from an
injected ILoggerFactory object:
public FlightsController(ILoggerFactory factory)
{
    _logger = factory.CreateLogger(
        "BlueYonder.Flights.FlightsController");
}
https://aka.ms/moc-20487D-m8-pg1
Each log message that you write is associated with a log category. The category comes from your ILogger
object, and you can specify it when you create the logger with ILoggerFactory.CreateLogger. By
convention, the category is the fully-qualified name of the class writing the logs. As explained in the
previous topic, you can have ASP.NET Core inject an appropriately configured logger object by accepting
a constructor parameter of the generic type ILogger<T>. For example, if your constructor accepts a
parameter of type ILogger<FlightsController>, ASP.NET Core will create and inject a logger
configured with a category equal to the fully qualified name of the FlightsController class.
Each log message you write is associated with an event ID. The various Log… methods (such as
LogInformation) have overloads that accept an event ID as the first argument. An event ID is an integer
value that you can freely assign, and it serves the purpose of associating related log events together. For
example, a log message for completing a new flight reservation can have log ID 4000, and a log message
for canceling a flight reservation can have log ID 4001. Using event IDs makes automatic event parsing
easier for log processors and business intelligence tools.
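For example, the event IDs described above can be defined as integer constants and passed to the
logging methods. The following is a sketch; the 4002 value is illustrative, and the constant names
reappear in the code example later in this topic:

private const int FlightReservationCompleted = 4000;
private const int FlightReservationCancelled = 4001;
private const int FlightReservationNotFound = 4002;   // illustrative value

_logger.LogInformation(FlightReservationCompleted,
    "Created reservation {res}", reservationId);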
Finally, each log message that you write can use a message template. You can also use plain strings that
you format yourself, but it is recommended to use a template that contains placeholders for useful but
variable data, such as flight numbers, reservation identifiers, and hotel addresses. The key benefit of using
a template is that you can store the variable data separately from the string message, and analyze the
data without parsing the complete string. This makes filtering, sorting, and various aggregations much
easier for log processing and analysis tools. In Lesson 2, “Diagnostic Tools,” Topic 3, “Overview of Event
Tracing for Windows (ETW),” we will discuss the value of semantic logging, where a log entry is not just a
plain string, but a structured payload. By specifying event IDs and message templates, you can use
structured (semantic) logging with any log provider, and not just ETW.
The following code shows how to use event IDs and message templates to emit user-friendly but also
machine-parsable structured logs:
Flight flight = …;
if (flight == null)
{
    _logger.LogWarning(FlightReservationNotFound,
        "Reservation {res} could not be found", reservationId);
    return NotFound();
}
_logger.LogInformation(FlightReservationCancelled,
    "Successfully cancelled reservation {res}", reservationId);
return Ok();
}
The ASP.NET Core logging API also offers logging scopes. In many cases, you have a set of log messages
associated with a single logical activity in your application, such as booking a flight or canceling a hotel
reservation. A logging scope aggregates log messages together, and if you use an appropriate logging
provider and log viewer, makes it easier for you to see the logical structure and hierarchy of logs in the
same scope. You can also use a logging scope to attach the same set of contextual information, such as a
request ID or transaction ID, to all logs in the same scope.
To create a logging scope, use the ILogger.BeginScope method, which returns an IDisposable object.
Logs written inside the scope are associated with the scope, until you dispose it. Logging scopes can be
nested, and in fact ASP.NET Core creates a logging scope for each controller method call, which includes
the request identifier, request path, and the name of your controller’s method.
The following code shows how to use a logging scope to aggregate multiple log messages under a single
scope:
// The scope name and its {flightId} placeholder are illustrative
using (_logger.BeginScope("Booking flight {flightId}", flightId))
{
    int seatsAvailable = …;
    _logger.LogInformation(
        "Seats available at requested fare: {seats}", seatsAvailable);
    if (seatsAvailable <= 0)
    {
        _logger.LogWarning("No seats available");
    }
    // … More code
}
• Azure App Service. This provider logs messages to files in the web app’s file system in Azure App
Service, or to blobs in a configured Azure Storage account.
The following code shows how to add built-in logging providers to your ASP.NET Core application by
configuring the web host builder in the application’s startup code. This is a minimal sketch; the Console
and EventSource providers match the demonstration later in this topic:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;

public static void Main(string[] args)
{
    var host = WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();        // remove the default providers
            logging.AddConsole();            // built-in console provider
            logging.AddEventSourceLogger();  // built-in EventSource/ETW provider
        })
        .UseStartup<Startup>()
        .Build();
    host.Run();
}
The following is an example of the output from the console provider when running one of the code
examples from the previous topic:
Note: You can also use the ILoggingBuilder.AddConfiguration method to read log
configuration settings from a configuration file, instead of specifying the logging providers and
levels in code.
Demonstration Steps
You will find the steps in the “Demonstration: Recording logs to the Console and EventSource providers“
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_DEMO.md.
To configure Serilog for your ASP.NET Core application, install the Serilog.AspNetCore NuGet package
and some additional packages based on the sinks that you want to use. For example, the console sink is in
the Serilog.Sinks.Console NuGet package. Then, you call the UseSerilog method in your web host
builder’s configuration and configure Serilog’s sinks, in turn.
The following code shows how to configure Serilog with the ASP.NET Core web host builder. This is a
minimal sketch; the console sink and minimum level are illustrative:

using Microsoft.AspNetCore;
using Serilog;

public static void Main(string[] args)
{
    var host = WebHost.CreateDefaultBuilder(args)
        .UseSerilog((context, loggerConfiguration) =>
            loggerConfiguration
                .MinimumLevel.Information()
                .WriteTo.Console())  // sink from the Serilog.Sinks.Console package
        .UseStartup<Startup>()
        .Build();
    host.Run();
}
Demonstration Steps
You will find the steps in the “Demonstration: Using Serilog“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_DEMO.md.
For more information on the Azure App Service logging provider, go to:
https://aka.ms/moc-20487D-m8-pg5
To configure logging to the file system or to an Azure Storage blob, you use the Diagnostic logs
pane under the MONITORING section of your Azure App Service’s settings on the Azure portal. Your
changes are applied immediately, and you don’t have to restart the application to get access to the logs.
The following screenshot shows the Diagnostics logs pane on the Azure portal. You can use the
Diagnostic logs pane to enable application logging to the file system or Azure Storage blobs:
The following screenshot shows the Log stream pane in the Azure portal:
Lesson 2
Diagnostic Tools
Understanding the performance profile and behavior of your web services is critical for successful testing
and production deployments. Backend services that do not perform well cause upstream problems for
other services that depend on them, and user-facing services that don’t perform well cause immediate
customer frustration with your system. The .NET runtime provides a variety of performance and diagnostic
information that you can use in development and testing to improve the performance of your service. You
can also use .NET runtime diagnostic information in production environments to monitor the health of
your service and respond accordingly.
In this lesson, we will explore the performance diagnostic facilities built into ASP.NET, IIS, and .NET Core
across both major operating system platforms: Windows and Linux. We will see how to monitor
application performance in production and record performance traces that can be analyzed later.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain the key benefits and use cases for Windows performance counters.
• Explain how to collect and monitor IIS and ASP.NET performance counters.
• Explain the architecture of ETW.
Here are some examples of useful Windows performance counters (more performance counters will be
discussed in Topic 2, “ASP.NET and IIS performance counters”):
• Memory\Available MBytes
To view and record performance counters, you can use the built-in Windows Performance Monitor
(perfmon). It can show the current values of the performance counters you specify, record them to a file
for later viewing, and open existing recordings. When recording performance counters, you can use
various file formats such as simple Comma-Separated Values (CSV), which you can easily import to
Microsoft Excel and similar software.
The following is a screenshot of the main Performance Monitor window, which is monitoring a few
performance counters:
• ASP.NET\Requests Current
• ASP.NET\Application Restarts
Because they are Windows-specific, performance counters are not supported by .NET Core. This means that
if you use the cross-platform .NET Core runtime, you will not see .NET-specific performance counters
(such as GC behavior) exposed from your application process. However, if you’re running on Windows,
you can switch your ASP.NET Core or .NET Core application to use the full .NET Framework runtime, which
will expose the traditional .NET performance counters. To do so, you only need to change the target
framework for your main project.
You can create custom performance counters in your own application code by using the
PerformanceCounterCategory and PerformanceCounter classes from the System.Diagnostics
namespace. You can use these classes to augment the set of performance monitoring data collected from
the system with application-specific insights that might further help pinpoint the problem. You can also
use the PerformanceCounter class to programmatically read performance counter values from system
counters, which can be used for self-diagnostics and reporting.
Note: In late 2017, performance counter support (including the relevant classes in
System.Diagnostics) was merged into .NET Core. As a result, you can now use the Windows
Compatibility Pack to add performance counter support to .NET Core applications. However, this
does not provide support for performance counters on non-Windows platforms.
For more information about the .NET Core Windows Compatibility Pack, and how it can be
used to help porting Windows applications to .NET Core, go to:
https://aka.ms/moc-20487D-m8-pg10
The following code example shows how to create a new performance category with two performance
counters, and then update them from the application code:
var counters = new CounterCreationDataCollection {
    new CounterCreationData("# reservations", "Total flight reservations created",
        PerformanceCounterType.NumberOfItems32),
    new CounterCreationData("# queries", "Total flight queries served",
        PerformanceCounterType.NumberOfItems32) };
PerformanceCounterCategory.Create(
    "Blue Yonder Flights",
    "Counters for the Blue Yonder flights reservation service",
    PerformanceCounterCategoryType.SingleInstance, counters);
// Update the first counter to a specific value (names and values are illustrative)
new PerformanceCounter("Blue Yonder Flights", "# reservations", readOnly: false)
    .RawValue = 10;
The preceding code example adds two performance counters with help descriptions to a single category
called “Blue Yonder Flights,” and then creates that category. After successfully creating the category, the
code updates the first counter to a specific value.
The .NET Core runtime does not support performance counters because they are a Windows-only
mechanism. If you need a similar mechanism that would work across all the platforms supported by .NET
Core, you should use event counters. Event counters are similar to event sources, but they provide just a
single counter value. Later in this lesson, you will learn about ETW and LTTng, which are the
implementation libraries behind event counters and event sources.
To learn more about event counters, refer to the event counter tutorial at:
https://aka.ms/moc-20487D-m8-pg12
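For example, an event counter can be defined alongside an event source, and application code can
report values to it. The following is a minimal sketch; the source and counter names are illustrative:

using System.Diagnostics.Tracing;

[EventSource(Name = "BlueYonder-Flights")]
public sealed class FlightsEventSource : EventSource
{
    public static readonly FlightsEventSource Log = new FlightsEventSource();
    private readonly EventCounter _requestTime;

    private FlightsEventSource()
    {
        // The counter aggregates reported values and emits periodic statistics
        _requestTime = new EventCounter("request-time-ms", this);
    }

    public void ReportRequestTime(float elapsedMilliseconds) =>
        _requestTime.WriteMetric(elapsedMilliseconds);
}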
Overview of ETW
As we have seen earlier in this lesson, obtaining
diagnostic data about a running system or
application is critical for its proper development,
testing, and operation. Performance counters are
a valuable tool in getting diagnostic data about
your system, but they can’t cover certain
scenarios. Specifically, performance counters are
not a good fit if:
• You need high-resolution data about events
that happen at a high frequency, such as
individual HTTP requests or individual
exceptions thrown. Performance counters can
only provide an aggregate.
• You need additional information about events that are more than just a single numeric value. For
example, you need the URLs of individual HTTP requests or the names of frequently accessed disk
files. Performance counters can only provide numeric information.
ETW is a Windows operating system component that is implemented in the Windows kernel. It is a high-
performance event tracing framework designed for rates of tens of thousands of events per second, with a
reasonable sustained CPU overhead. Numerous Windows components and higher-level application
frameworks (including .NET Core, Task Parallel Library, and IIS) are instrumented with ETW support, and
can provide diagnostic data about their internal operations by using ETW events.
ETW events have a well-defined structured payload, which is one of the key differences between them
and plain log messages. For example, instead of emitting a log message such as “Received new flight
reservation EWR-YYZ on flight BY 005, fare class W, passenger name Mr. David Smith” as a plain string,
you would define an event payload called NewFlightReservationEvent with the following fields, and
emit it through the ETW infrastructure:
• Number (int) = 5
• Origin (string) = “EWR”
• Destination (string) = “YYZ”
• FareClass (string) = “W”
• PassengerName (string) = “Mr. David Smith”
As a result, it is very easy for tools to parse ETW events and understand their contents, which helps with
filtering, sorting, aggregation, and other tasks that are difficult to perform on unstructured log data
without having to parse and interpret it first. This paradigm is called structured logging, or semantic
logging, and it is becoming more and more common in modern tracing frameworks that are designed for
producing and retaining large amounts of trace data for subsequent analysis.
Note: You can design and provide application-level ETW events by using the EventSource
class, which was discussed in Topic 4, “.NET-related ETW events.” Recording application-level
events alongside with system events can help diagnose complex problems by tracing data flow
and events throughout your application stack.
• Providers. ETW providers emit events with well-defined, structured, and discoverable payloads. The
events are not stored or copied anywhere by default; a provider has to be enabled for tracing to
occur.
• Sessions. ETW sessions store events written by providers in a set of buffers, which can be directed to a
file on disk or discarded when the buffer becomes full.
• Controllers. ETW controllers create a session, and then enable specific providers to write events into
the session. A provider may write events into more than one session.
• Consumers. ETW consumers process ETW events. Events can be processed from an on-disk file (.etl,
Event Trace Log) or a real-time memory buffer to which they are written by one or more providers.
In many cases, you will use ETW to record a set of events to a file, and then open that file with dedicated
analysis tools. However, it can also be very useful to process ETW events in real-time, without having to
record them to a file. This enables continuous monitoring and aggregation without the additional
overhead of writing high-frequency events to disk. Numerous monitoring frameworks (including
Application Insights, discussed in Lesson 3, “Application Insights”) use ETW behind the covers to
implement accurate low-overhead instrumentation and diagnostics.
Some common tools that you will use when working with ETW are:
• PerfView. An open source multi-tool that can be used to record and analyze ETW events, supports
table and flame graph visualizations, and understands a variety of event formats. PerfView is
discussed in Topic 4, “.NET-related ETW events.”
• Windows Performance Analyzer. A graphical tool that reads and analyzes .etl files, and supports
multiple types of advanced visualizations.
• Windows Performance Recorder. A combination GUI/console tool that records ETW events to a file
based on a configuration that you specify.
Note: When recording an ETW event, you can also capture the application call stack that
led to the generation of this event. For many types of events, the call stack is an extremely
valuable piece of information. For example, consider an event generated when a file is written to
disk: having just the event data would be useful, but knowing where in the application code the
file was being written can be even more useful.
• AssemblyLoad. Emitted when a .NET assembly is loaded, and includes the assembly name, version,
and load path.
• GCStart, GCEnd. Emitted when a garbage collection starts and ends, and includes the generation
being collected and the GC reason.
• ContentionStart, ContentionStop. Emitted when a managed thread starts to wait for a lock and
when the thread acquires the lock. This event includes the lock being waited for and the waiting
thread.
• GCAllocationTick. Emitted for every 100KB (approximately) of allocated memory, and includes the
type of the last allocated object and the amount of allocated memory.
You can include ETW events in your application by using the EventSource class in the
System.Diagnostics.Tracing namespace, which is available in .NET Core and in the full .NET Framework
(as of .NET 4.5). This class handles the low-level details of interacting with the operating system, and
provides a clean API for defining the event payload and writing events with a minimal effort. What’s more,
if you use the EventSource class in a .NET Core application, it will automatically use ETW when running
on Windows, and LTTng (discussed in Topic 5, “LTTng events in .NET Core on Linux”) when running on
Linux.
The following code example defines a set of ETW events by using the EventSource class, and then emits
them from the application code:
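The following is a minimal sketch consistent with the description below; the provider name and event
parameters are illustrative:

using System.Diagnostics.Tracing;

[EventSource(Name = "BlueYonder-FlightQueries")]
public sealed class FlightQueriesEventSource : EventSource
{
    public static readonly FlightQueriesEventSource Log =
        new FlightQueriesEventSource();

    [Event(1)]
    public void QueryStarted(string origin, string destination) =>
        WriteEvent(1, origin, destination);  // parameters form the structured payload

    [Event(2)]
    public void QueryCompleted(int resultCount) =>
        WriteEvent(2, resultCount);
}

// Application code emits events by calling the event methods:
FlightQueriesEventSource.Log.QueryStarted("EWR", "YYZ");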
In the preceding example, the FlightQueriesEventSource class derives from EventSource, and defines
two public methods called QueryStarted and QueryCompleted. These methods and their parameters
automatically form the structured event payload for two events. Finally, the application code needs to call
only these methods because the underlying ETW infrastructure is handled by the
EventSource.WriteEvent method.
To record .NET ETW events in PerfView, you can use the Collect > Run or Collect > Collect menu items.
To record custom providers from your application, you need to specify their names in the Additional
Providers box. After collecting events, you can view them using PerfView’s rich reporting facilities, which
include general statistics (such as garbage collection events) and individual event data.
The following screenshot depicts PerfView’s main collection dialog box, where you specify which events
you’d like PerfView to record:
The following screenshot depicts PerfView’s main window after expanding the recording performed in the
previous step:
The following screenshot depicts PerfView’s GCStats report, which can be used for diagnosing high
garbage collection rates and pause times:
You can download PerfView from the GitHub repository Releases page, where the project is
developed and maintained:
https://aka.ms/moc-20487D-m8-pg15
For more information on using PerfView to record and analyze ETW events, refer to this
series of video tutorials on Microsoft Channel 9 from PerfView’s author, Vance Morrison:
https://aka.ms/moc-20487D-m8-pg16
LTTng (Linux Trace Toolkit, next generation) is an open source project that was first released in 2005.
LTTng provides correlated application and system tracing support. LTTng works on a variety of Linux
distributions, including the distributions supported by .NET Core (such as Ubuntu, Red Hat Enterprise
Linux, and others). The LTTng architecture is fairly similar to ETW, although instead of relying on a kernel
component for collecting application events, it employs a user-space component. LTTng also has some
interesting features that are not supported by ETW. An example of such a feature is relaying trace data
to a different machine. On the other hand, one ETW feature that is missing from LTTng is the ability to
record application call stacks with events.
You can install LTTng from package repositories for various distributions. LTTng installs a daemon
(background service), which collects data from running sessions and pushes it to files. It also installs the
lttng command-line tool, which you can use to create a session, add events to the session, start recording
the session, and stop the session when you’re done.
The following code example shows how to install LTTng on Ubuntu and Red Hat Enterprise Linux, the two
common Linux distributions supported by .NET Core:
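The following is a sketch of the installation commands; exact package names and repositories vary
between distribution versions:

# Ubuntu (package names can vary between releases)
sudo apt-get update
sudo apt-get install lttng-tools liblttng-ust0

# Red Hat Enterprise Linux (assumes the EPEL repository is enabled)
sudo yum install lttng-tools lttng-ust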
By default, .NET Core on Linux does not emit runtime and application events to LTTng. You can control
this behavior by setting the COMPlus_EnableEventLog environment variable to 1 prior to launching your
application. You can’t change this setting if you have already started the application; you will need to
restart the application for the change to take effect.
The following code example shows how to launch an application with the COMPlus_EnableEventLog
environment variable set appropriately, and then use the lttng tool to record the ExceptionThrown CLR
event:
# Create a session and enable the CLR exception event from the DotNETRuntime provider
lttng create exceptions-trace
lttng enable-event --userspace DotNETRuntime:ExceptionThrown_V1
# Add context data (process id, thread id, process name) to each event
lttng add-context --userspace --type vpid
lttng add-context --userspace --type vtid
lttng add-context --userspace --type procname
lttng start
# Launch the application with CLR event forwarding to LTTng enabled
COMPlus_EnableEventLog=1 ./myapp
By default, LTTng records events to a series of files that are placed in a directory that you specify. To view
the collected data, you can use several viewer tools. A very simple command-line tool for viewing LTTng
traces is babeltrace, which can read the LTTng output (in Common Trace Format, CTF). Another option is
the Trace Compass tool, which can visualize trace data and events. Lastly, if you create a .zip archive of
the entire LTTng recording directory, you can copy it to a Windows machine and open it by using PerfView.
The following code example shows how to use the babeltrace tool to display events and the output it
produces on a sample trace:
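For example, assuming the session created earlier and LTTng’s default output directory under
~/lttng-traces:

lttng stop
lttng destroy
babeltrace ~/lttng-traces/exceptions-trace-*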
Microsoft provides the perfcollect script, which can be used to record LTTng events and put
them in an archive that you can access by using PerfView. The perfcollect script is available
on GitHub: https://aka.ms/moc-20487D-m8-pg18
Demonstration Steps
You will find the steps in the “Demonstration: Collecting ASP.NET Core LTTng events on Linux“ section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD08_DEMO.md.
Objectives
After you complete this lab, you will be able to:
• Collect and analyze ETW events with PerfView for an ASP.NET Core application on Windows.
• Collect and analyze LTTng events for an ASP.NET Core application in a Linux Docker container.
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_LAK.md.
Lesson 3
Application Insights
Traditionally, monitoring and performance tools have focused on hardware resource consumption, such
as CPU utilization and memory usage, and trivial black-box performance metrics, such as the average
response time for a request to a specific server. With the advent of complex, distributed systems that
consist of tens or hundreds of inter-dependent services, it has become increasingly difficult to understand
the causes of increased resource consumption or degraded response times.
In this lesson, we will discuss Application Insights, an application performance monitoring tool provided
by Microsoft and hosted at scale on Azure. By using Application Insights, you can go beyond monitoring
hardware resources or single-system utilization, and focus on the holistic behavior of the entire system,
trace a request as it crosses multiple services and databases, and truly understand outliers and
problematic events.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain the types of application telemetry provided by typical web services.
• Transaction Monitoring and Tracing. These map and measure all the services and database systems
involved in executing a single business transaction, such as making a flight reservation and upgrading
a hotel room booking. This helps you understand performance problems in low-level components and
map them to issues experienced by real users.
• Analytics and Forecasting. These present high-level statistics on commonly executed paths in the
application, user interaction patterns (for example, navigation flows in a mobile app), error rates, and
other interesting metrics. Predictive analytics tries to use the collected data to forecast future
behavior, which is important for capacity planning and projecting business growth.
• Runtime-Specific Analysis. This uses special instrumentation agents to monitor the performance of
high-level runtimes, such as .NET, Java, and Node.js, and present interesting findings, such as
exceptions thrown, garbage collection performance, and threading behavior and efficiency.
The data collected by an APM tool usually originates at the following sources:
• Servers and infrastructure. A monitoring agent can be installed on the target machine and collect
performance metrics, or cloud diagnostic tools (such as those available in Microsoft Azure) can
provide information on hardware resource utilization.
• Web application or service. Middleware, embedded into the web application or the web server itself,
can report important information on HTTP response latency, the types of HTTP status codes returned,
and the commonly accessed URLs.
• Application code. Applications and services can use a special diagnostic API (provided as part of the
APM’s library) to emit custom events and metrics.
• Application and server logs. The APM tool can aggregate and collect the log messages reported by
the application or the web server.
• Browser events. A JavaScript instrumentation library can send data to the server about browser
performance events, such as page load and rendering times, and HTTP response times as seen by the
client.
Application Insights is a complete, robust, scalable APM solution for web services and applications that
use various languages and runtimes such as .NET, Java, and Node.js. Application Insights is hosted and
scaled automatically by Microsoft Azure, but you can use it to monitor the performance of on-premises or
cloud applications. Application Insights is not just a data collection module. It provides a powerful
analytics dashboard that automatically detects anomalies, can perform profiling and load testing, and can
help explain issues that users experience when using your web application or service.
• Client HTTP requests and responses, including status codes, latency, failure rates, client location
information, and headers.
• HTTP requests and responses to any services on which your application depends.
• Exceptions and errors in both server processes (such as .NET) and browser applications.
• Webpage performance as reported by web browsers (for example, page rendering times).
• Raw system performance data, such as Windows performance counters or Linux metrics.
• Azure diagnostic data for Azure Virtual Machines, Azure App Services, and other sources.
In addition to standard APM features such as dashboards, statistics, and analytics, Application Insights
offers several distinguishing features that help developers find the root cause of performance problems
and errors:
• Application Insights Profiler. The profiler runs in the background for a few minutes per hour and
uses low-overhead profiling techniques to show hot methods that take a long time to service
requests in your applications.
• Application Map. Application Insights automatically maps your dependencies and database calls,
and shows an interactive, navigable map of your application.
• Snapshot Debugger. You can set trace points in an application running in production, and the
debugger will capture the stack trace and the values of parameters and variables that you specify, so you
can refer to them in Visual Studio 2017.
For an introduction to Application Insights and numerous links to other resources, tutorials,
documentation, and videos, go to: https://aka.ms/moc-20487D-m8-pg19
To add Application Insights to a live Azure web app (in Azure App Service), you enable Application
Insights from the web app’s blade in the Azure portal. This automatically turns on Application Insights
monitoring, which will collect HTTP response data, exceptions thrown, dependencies accessed by the
application, system performance data, and more.
The following screenshot illustrates how to add a new Application Insights resource to an existing, live
web app in Azure App Service. After adding the resource, the application is restarted and automatically
monitored by Application Insights.
The following screenshot depicts the Live Stream feature in Application Insights, showing live real-time
data from the monitored web app.
For more information on using Application Insights with live web apps hosted in Azure App
Service, go to: https://aka.ms/moc-20487D-m8-pg20
For more information on Application Insights dashboards, navigating them, and customizing
them for the needs of your specific resources, go to: https://aka.ms/moc-20487D-m8-pg21
To add Application Insights to your ASP.NET Core web service at the time you develop it, use the Visual
Studio wizard located in Project > Add Application Insights Telemetry. The wizard adds a NuGet
package to your project and integrates into the ASP.NET Core pipeline. It will then collect data on HTTP
requests and responses, .NET exceptions, traces and logs, and more. In addition, telemetry is sent to
Application Insights whether the application is hosted in an on-premises environment or published to
Azure, and even when you run the application locally during development.
The following screenshot shows the Visual Studio wizard for adding Application Insights to a newly-
created ASP.NET Core web application, where you can configure the Application Insights resource, billing,
and other settings:
For more information on using Application Insights with ASP.NET Core web applications and
services, go to: https://aka.ms/moc-20487D-m8-pg22
Demonstration Steps
You will find the steps in the “Demonstration: Integrating and viewing Application Insights“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_DEMO.md.
• TrackEvent. This method is for tracking generic events and user actions such as button clicks or
transitions between text boxes in a form.
• TrackMetric. This method is for tracking generic performance metrics such as the number of threads
processing a specific request.
• TrackException. This method is for tracking exception information and stack traces.
• TrackRequest. This method is for tracking all types of requests performed by the server, supporting
latency analysis on request duration and frequency.
• TrackDependency. This method is for tracking calls and durations to any external component, such
as a database, a storage system, or a web service.
To start using the Application Insights API from your .NET application, add the Application Insights SDK to
your project, and then create an instance of the TelemetryClient class. The Application Insights
instrumentation key from your appsettings.json file is used automatically to send data to the appropriate
Application Insights resource.
The following code example demonstrates how to create a new instance of the TelemetryClient class:
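The following is a minimal sketch; it assumes the Microsoft.ApplicationInsights SDK package is installed:

using Microsoft.ApplicationInsights;

TelemetryClient telemetryClient = new TelemetryClient();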
The following code example demonstrates how you can track a database query to an external database,
which might not be supported by Application Insights for some reason:
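The following is a sketch that uses the StartOperation helper, which times the operation and sends a
DependencyTelemetry item when it is disposed; the operation title is illustrative:

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

using (var operation =
    telemetryClient.StartOperation<DependencyTelemetry>("Legacy flights database query"))
{
    // Execute the external database query here...
}   // disposing the operation records its duration and sends the telemetry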
In the preceding example, no additional information other than the operation title is provided to
Application Insights. Nonetheless, when running in an ASP.NET Core context, Application Insights will
automatically track the HTTP request being handled, and some additional information. For further
customization (to include additional data in the event), set properties on the DependencyTelemetry
class. For example, you might want to set the Data property to the database query performed. You can
also add arbitrary application-defined values to the Properties and Metrics dictionary properties.
The following example demonstrates how you can track an arbitrary custom event, such as cancelling a
flight reservation:
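The following is a minimal sketch; the event name and property are illustrative:

using System.Collections.Generic;

telemetryClient.TrackEvent("FlightReservationCancelled",
    properties: new Dictionary<string, string> { ["reservationId"] = reservationId });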
The following example demonstrates how you can track a custom performance metric, such as the
number of threads currently servicing requests in a custom thread pool implementation:
var metric = new MetricTelemetry(
    "Request-serving threads", threadCount);  // name and value are illustrative
Telemetry.TrackMetric(metric);
}
Note: During development, you might want to temporarily disable telemetry. You can do
so by setting the TelemetryConfiguration.Active.DisableTelemetry property to true.
Alternatively, you might want to use a separate Application Insights resource for development or
to test telemetry, to avoid getting it mixed with the production telemetry data.
When performing high-frequency data ingest, you might want to use sampling to reduce
traffic and data costs for your Application Insights resource. For more information on using
sampling, go to: https://aka.ms/moc-20487D-m8-pg24
To learn more about the Azure Log Analytics query language, go to:
https://aka.ms/moc-20487D-m8-pg25
The following query retrieves the top 10 countries by traffic in the past 24 hours, by starting from the
requests table, and then adding a filter by timestamp, grouping by the client’s country or region
(determined automatically from their IP address), and rendering the results as a pie chart:
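The following is a sketch of such a query, using the standard Application Insights schema (the requests
table and the client_CountryOrRegion column):

requests
| where timestamp > ago(24h)
| summarize count() by client_CountryOrRegion
| top 10 by count_
| render piechart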
The following screenshot depicts the result of executing the above query in the Application Insights
portal:
In addition to just looking at the requests and errors in your application itself, you can also analyze data
from any external dependency calls performed by your application. For example, if your service makes
HTTP requests to another service, or if your service uses table storage, databases, and other external
resources, this data is tracked in the dependencies table.
The following query extracts all failed requests to dependencies of type SQL (which are databases) and
groups the operation by the SQL query executed, from the data column:
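The following is a sketch of such a query, using the standard Application Insights schema:

dependencies
| where timestamp > ago(24h)
| where type == "SQL" and success == false
| summarize count() by data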
The following screenshot depicts the output of the above query, illustrating that over the last 24 hours,
there were 1,435 failed SQL statements with the same text—inserting a value into the ServiceTickets
table:
For more information on Application Map and the next-generation Composite Application
Map (which is in preview at the time of writing), go to: https://aka.ms/moc-20487D-m8-pg26
Demonstration Steps
You will find the steps in the “Demonstration: Viewing application dependencies and request timelines “
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_DEMO.md.
Note: If you don’t have a Visual Studio Team Services account, you will need to create one.
The Azure portal will automatically suggest that you create an account, or help pick one of your
existing accounts that can be associated with the Azure App Service.
The following screenshot depicts the Performance test tab for a web application hosted in Azure App
Service in the Azure portal:
The following screenshot depicts the performance test configuration dialog box, where you can specify
the duration of the load test and the simulated user load:
Note: You can easily overload an important service (in other words, carry out a denial of
service attack), by using load tests on the Azure load testing infrastructure. Take great care to
only test systems under your direct control, never test production instances serving an important
load, and make sure other people in your organization are aware of the load test.
For more advanced load testing scenarios, you should use Visual Studio Team Services
directly. Refer to the quick start and documentation at: https://aka.ms/moc-20487D-m8-pg27
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_LAK.md.
Best Practice
Invest considerable time in instrumenting your application with tracing and performance counters. Make
sure you can successfully monitor the application in the development environment. This will make it easier
to monitor in Azure, and help guarantee that you can diagnose problems that occur only in the
production environment, such as failures under heavy load.
Review Question
Question: How can you monitor applications running in Azure?
Tools
• Microsoft Visual Studio 2017
Module 9
Securing Services On-premises and in Microsoft Azure
Contents:
Module Overview
Module Overview
Security is a major concern for many distributed applications. Key security issues that you must address
when you design a web service include authentication, authorization, and secured communication.
Managing identities in distributed systems can be challenging. Identities are often shared across
application and organization boundaries. Claims-based identity is a modern approach designed to
overcome these challenges in distributed systems. This module describes the basic principles of modern
identity handling. The module also demonstrates how to use infrastructures such as Microsoft Azure
Active Directory (Azure AD) to implement authentication and authorization with claims-based identity in
Microsoft ASP.NET Core applications. The module covers both intra-organization authentication and B2C
authentication scenarios.
By applying the concepts and technologies covered in this module, you can simplify authentication and
authorization in your distributed applications integrating with modern identity providers.
Note: The Azure portal UI and Azure dialog boxes in Microsoft Visual Studio 2017 are
updated frequently when new Azure components and SDKs for Microsoft .NET are released.
Therefore, it is possible that some differences will exist between screenshots and steps shown in
this module, and the actual UI you encounter in the Azure portal and in Microsoft Visual Studio
2017.
Objectives
After completing this module, you will be able to:
• Describe the authentication and authorization flows in OpenID Connect, including Server-to-Server
authorization.
• Integrate client applications and authenticate users by using Microsoft Authentication Library (MSAL).
Lesson 1
Explaining Security Terminology
Before you understand how to implement security in your services, it is important that you understand
why securing services is important and what security features are available to secure web services. This
lesson provides you with an overview of security terminologies.
Lesson Objectives
After completing this lesson, you will be able to:
Symmetric/Asymmetric encryption
In today's network and computer environments,
providing security for data is becoming
mandatory. Encryption is one of the methods for
providing such security and its main intention is to
protect user information that is being transmitted
between a browser and a remote server.
Such information can be passwords, personal
details, payment information, or any information
that is considered private. In addition to
protecting information over the network,
organizations or individuals also protect their
information stored on local computers, servers,
and the mobile devices they own.
The encryption process applies an encryption algorithm, which is a mathematical algorithm that is applied
to the data, by using an encryption key. The encryption process generates encrypted text, which is also
known as ciphertext. This text can be converted back to its original form only by applying the original key.
This process is called decryption.
There are two main types of encryption:
• Symmetric encryption. Using the same key to encrypt and decrypt the information.
• Asymmetric encryption. Using one key (public) to encrypt the information and a different key (private) to decrypt it.
Symmetric encryption
Symmetric encryption is easy, fast to implement, and has been in use for many years. The key can be a
string, a number, or a combination of random letters.
A wide range of symmetric key ciphers is still in use. An example is AES (Advanced Encryption Standard;
AES-128/AES-192/AES-256), which is used by government agencies to protect their data. Other
examples include Blowfish, RC4, DES, RC5, and RC6.
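For example, in .NET you can perform symmetric encryption with the Aes class. The following is a
minimal sketch; key management and error handling are omitted:

using System.Security.Cryptography;
using System.Text;

using (Aes aes = Aes.Create())  // generates a random key and IV
using (ICryptoTransform encryptor = aes.CreateEncryptor())
using (ICryptoTransform decryptor = aes.CreateDecryptor())
{
    byte[] plaintext = Encoding.UTF8.GetBytes("card number: 1234");
    byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

    // The same key and IV are required to decrypt the data
    byte[] roundTrip = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
}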
However, symmetric encryption has several drawbacks:
• Because the same key is used for both encryption and decryption, the key needs to be shared
somehow between the sender and the receiver. This means that if the key is exposed or lost it needs
to be regenerated and distributed again.
• It does not scale very well because each type of application and user requires different keys.
Regenerating and maintaining keys are difficult tasks.
Asymmetric encryption
Asymmetric encryption, which is also called public key cryptography, uses two different keys—public key
and private key—that are linked together mathematically. The public key is, as the name implies, public. It
can be shared and used by anyone who wants to send information. The sender can encrypt data using a
public key.
The receiver uses the private key to decrypt the data. The receiver needs to keep the private key secure.
To prevent brute-force attacks, a private key needs to be complex and long. Many cryptographic
processes use symmetric cryptography to efficiently encrypt the data but asymmetric cryptography to
exchange the key.
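For example, in .NET you can use the RSA class for asymmetric encryption. The following is a minimal
sketch; in practice, the public and private keys live on different machines:

using System.Security.Cryptography;
using System.Text;

using (RSA rsa = RSA.Create())
{
    byte[] plaintext = Encoding.UTF8.GetBytes("session key material");

    // The sender encrypts with the receiver's public key...
    byte[] ciphertext = rsa.Encrypt(plaintext, RSAEncryptionPadding.OaepSHA256);

    // ...and only the holder of the private key can decrypt
    byte[] decrypted = rsa.Decrypt(ciphertext, RSAEncryptionPadding.OaepSHA256);
}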
In HTTPS, all communication between the client and the server is encrypted.
• Transport Layer Security (TLS), which is a more recent protocol that aims to replace SSL
You can identify whether a site uses SSL encryption through several visual hints. The address will start
with https://, the background color of the address bar may change, a padlock icon may appear near the
address bar, and sometimes an SSL certificate logo of the certificate authority (CA) will appear on the site.
2. An SSL handshake is performed for any client that connects to the server. During the handshake phase,
the server and the client agree on the protocol (SSL/TLS) and its version. Then the server
sends the client a certificate, which contains the public key.
3. The client validates the certificate and generates a session key (a third key), which is used for the
session. This key needs to be sent back to the server, so it is encrypted with the server’s public key.
Only the server, which has the private key, can decrypt this message and decipher the session key that
was generated.
4. During the session, all messages between the client and the server will be encrypted by using the
symmetric key.
One of the greatest challenges of the digital cyberspace is to identify the entities that interact with the
system. For example, a social network app has to identify its users to allow them access to their private
space in the network and a bank has to identify its customers to allow them access to their bank account.
Digital identity will usually hold one or more attributes associated with it. Identity attributes include
usernames, passwords, email addresses, phone numbers, or any other information provided by the user.
Attributes that are mandatory for verifying the existence of a digital identity in a system are called
credentials.
A credential is a set of attributes that is validated by the system to verify that an
entity is legitimate. For example, a bank account may require its users to identify themselves when
logging into the system by providing three attributes—an email address, a username code, and a personal
password the user has chosen. These three attributes are the credentials required by the bank. Other
systems may require a different set of credentials such as email and phone number. Credentials for a
system may even be in a digital form such as a digital signature or biometrics.
Authentication
Authentication is the process of confirming that an identity is, in fact, who it claims to be.
• Blocking users that did not sign into the system for a long time
On the other hand, some systems allow users to use their credentials for other services (typically a social
network or a service such as a Microsoft account). This allows the authentication process to grant access
to the system based on the third-party service credentials. This is a very popular technique in many
authentication systems today. It helps the users to reduce the number of credentials they need to
memorize to access different services.
A common security enhancement for authentication systems is called two-factor authentication (TFA).
Those systems require the users to supply two different sets of credentials. For example, a user may have
to provide a username and password combination, and then enter a code or a token shared with the user
through email, SMS, or a dedicated app. Such a system will grant access only if the user of the system has
a corresponding device, such as a mobile phone, to which the code is sent. This code will be valid only for
a limited period. This way, only the holder of the physical device can use the code and gain access to the
system.
Authorization
Allowing access to a computer system by identifying and authenticating users is not enough. Just by
getting authenticated, a user won’t get the rights to perform all the actions in the system. Authorization is
the process that follows authentication. The goal of authorization is to determine what actions a user is
allowed to perform.
For example, a secured computer system may restrict several users from performing sensitive actions in
the system by categorizing them into groups such as administrator, managers, and users. Administrators
may have access to every part of the system. They will be able to change policies and add and remove
users.
Managers may have access only to an area where they can modify data. For example, updating stocks,
prices, and delivery options in a commercial application. Users may have access only to an area where
they can buy products and manage their shopping cart. A user cannot update the price of a product, and
a manager cannot remove users from the system.
The authorization mechanism helps determine which identity has access to which resources of the
system.
Here is a summary of the differences between authentication and authorization:
• Authentication determines who the user is.
• Authorization determines what the user is allowed to do.
Authentication modes
Single sign-on (SSO) is the concept of authenticating just once and reusing that authentication
information to access multiple services without having to reauthenticate at every sign in. For the SSO
scenario to work, enterprise systems use an IdP to perform the authentication process. The IdP is the
organization that maintains a directory of users and authentication mechanisms.
The organization that hosts the target application is called the application service provider (ASP).
In most IdP systems, authentication is done by sending back a signed token that has the credentials and
trust signature for the requester. Those tokens can be retrieved in several standard formats such as SAML,
OAuth 2.0, and OpenID Connect.
In a scenario where a user has an account with the IdP and wants to use an application in the service
provider, several authentication modes can be used. Here are two examples:
• Passive authentication. When the user accesses the service provider resource, such as Salesforce or
Office 365, the service provider will redirect the request to the federation server, which will contact the IdP
federation server to generate a token. The user will be redirected to the IdP sign-in page. After
successfully signing in, the user will get the token to access the service provider resource.
• Active authentication. In an active authentication scenario, the client connects to the IdP directly,
receives the token, and then uses the token to authenticate access to the service provider. A typical
example is a rich client, such as a mobile app, that obtains a token from the IdP itself instead of being
redirected through a browser.
Claims-based Authentication
In a typical scenario, an application, such as a web browser or some other client, working on behalf of a
user, asks an STS for a token that contains the claims for this user. The STS authenticates the user so that
the STS can confirm the identity of the user (for example, verifying passwords or validating tickets).
Typically, the request sent to an STS contains a Uniform Resource Identifier (URI) that identifies the
application that the user wants to access. The STS then looks up information about both the user and the
application from a database that maintains account information and other attributes about users and
applications. This can also be accomplished by using the Active Directory service. After the STS finds what
it needs, it generates the token and returns it to the requester.
Claims, tokens, and STSs are the foundation of claims-based identity. The idea is to let a user present
digital information to an application in a unified manner so that the application can make a decision
about the user that presented the claims-based token. The user will usually get a token for an
application from an STS. After the user gets the token, the client sends it to the application, which is
configured to work with one or more trusted STSs.
To process the token, the application depends on an identity library, which verifies the token’s signature,
so that the application knows which STS issued the token. If the application trusts the STS that issued this
token, it accepts the token’s claims as correct and uses them to decide what the user can do. For example,
if the token contains the user’s role, the application can assume that the user really has the rights and
permissions associated with that role.
Lesson 2
Securing Services with ASP.NET Core Identity
ASP.NET Core Identity is a membership system that adds logon functionality to ASP.NET Core apps. Users
can create an account with the logon information stored in the identity or they can use an external logon
provider. Supported external logon providers include Facebook, Google, Microsoft, and Twitter.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain how to use ASP.NET Core extensibility features to add authentication with social networks.
Authentication Capabilities
The authentication capabilities of ASP.NET Core
Identity help build a membership system around user logons and credentials by defining several logon
techniques such as username and password combinations, OAuth with token authentication, social
network logons, and advanced features such as two-factor authentication and password recovery.
Authorization Capabilities
The authorization capabilities of ASP.NET Core Identity help define:
• Simple authorization. Controls what users logged on to the system can see regardless of their roles or
claims.
• Role-based authorization. Allows access to certain resources based on the user roles. For example, in a
content management system, the system may have a user role with read-only access, an editor role
for modifying the content, and an admin role that can modify system settings and grant access to
different users.
• Claim-based authorization. Restricts the access to the resource for a subset of users that meet certain
criteria such as all users under a certain age and all workers with a particular employee number. This
fine-grained control gives a lot of flexibility to security systems.
• Policy-based authorization. By defining a policy rule, the system can grant access according to that
rule. For example, giving access to a resource only between 09:00 AM and 06:00 PM.
You can configure ASP.NET Core Identity to use the Microsoft SQL Server database to store users, users’
passwords, claims, and roles, and also manage sign-in sessions. In addition, you can use ASP.NET Core
Identity with your own persistent storage such as MongoDB or Microsoft Azure Table storage.
ASP.NET Core Identity combines authentication and authorization capabilities with the power of Entity
Framework. For a developer, this gives great flexibility and productivity when adding security layers to
web apps.
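For example, the following command creates a new project without any authentication (the project name
myApiSite is illustrative):
Creating an ASP.NET Core MVC project without authentication
dotnet new mvc -o myApiSite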
This creates a default .NET Core MVC project with no authentication capabilities. Therefore, all APIs in this
project are accessible to all users, and no user needs to be logged on.
When you wish to add an authentication layer to your project, you need to specify the --auth flag, which
supports several values, including:
• Individual. An individual authentication layer where identity management is done on the website
itself.
The command for creating an ASP.NET Core Web API project with individual authentication is:
Creating an ASP.NET Core Web API project with individual authentication
dotnet new mvc -o myApiSite --auth Individual --use-local-db
This command creates a new .NET Core MVC project named myApiSite with an authentication
middleware of individual accounts stored in a LocalDB database, which is managed by Entity Framework.
If you do not specify the --use-local-db flag, a default SQLite database is created instead. To restore all
dependencies and initialize the database tables, you need to run the scaffolding and migration steps
described in the next section.
Scaffolding
ASP.NET Core 2.1 Identity is implemented as part of the Razor Class Library. Because of this, the default
application project template does not include the source code for the identity framework. However,
sometimes it is useful to add scaffolding code that will allow you to modify certain default behaviors. The
following procedure will create the auto-generated classes that are being used by the framework for a
register, logon, and logout scenario.
To enable scaffolding, you first need to install the ASP.NET Core Identity code generator tool.
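The following commands, current as of ASP.NET Core 2.1, install the generator as a global tool and add
the design-time package it depends on:
Installing the code generator
dotnet tool install -g dotnet-aspnet-codegenerator
dotnet add package Microsoft.VisualStudio.Web.CodeGeneration.Design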
Next, you will need to run the code generator for the requested files:
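For example, to generate the register, logon, and logout pages (the DbContext namespace shown here is
a placeholder):
Running the code generator
dotnet aspnet-codegenerator identity -dc MyApiSite.Data.ApplicationDbContext --files "Account.Register;Account.Login;Account.Logout"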
Note: You will need to use the real ApplicationDbContext namespace for your project.
The next step will be to run the database migrations and the seed code that creates the database tables
used for the Identity framework.
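Assuming the default Entity Framework setup, these are the usual commands:
Restoring dependencies and applying migrations
dotnet restore
dotnet ef database update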
The CLI tool will create all the scaffolding classes needed for the Identity System along with some UI
classes that will enable users to perform logon, logout, and registration.
It is important to notice the following code changes made by the creation process:
• The appsettings.json file contains the connection string for the LocalDB database. This is the
place to plug in another connection string if you decide to work with a different storage provider.
• An Areas/Identity folder, which contains all the scaffolding code. Among the generated files, it is
interesting to look at:
• A Startup.cs file which contains all the bootstrap code needed by the identity system.
By looking at the Startup.cs file, you can see the code that was generated by the .NET scaffolding process:
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
The code plugs in the Entity Framework DbContext into ASP.NET Core built-in Dependency Injection (DI)
services. Then, it adds the identity framework.
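The full registration code typically resembles the following sketch; the ApplicationDbContext type and
connection string name depend on your project:
Registering Entity Framework and the identity framework
public void ConfigureServices(IServiceCollection services)
{
    // Plug the Entity Framework DbContext into the built-in DI container.
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    // Add the identity framework, backed by the Entity Framework stores.
    services.AddDefaultIdentity<IdentityUser>()
        .AddEntityFrameworkStores<ApplicationDbContext>();

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}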
Finally, the Configure method on the Startup.cs class will enable authentication:
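In the default template, this is a single middleware registration:
Enabling the authentication middleware
// Must be registered before app.UseMvc() so that requests are
// authenticated before they reach the MVC pipeline.
app.UseAuthentication();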
This will enable authentication on the ASP.NET project that we have created.
There are more scaffolding options; to see them all, run the following command:
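For the identity generator, the built-in help flag lists all supported options:
Listing the scaffolding options
dotnet aspnet-codegenerator identity --help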
In the next lesson, we will add users and roles to the system and see how it works.
After the dotnet restore process and the database update process are finished, the following tables will be
created automatically by the Entity Framework seed code:
• AspNetUsers. A table to store all the users, their email addresses, and their hashed passwords.
• AspNetUserLogins. A table to store the external logons (provider and provider key) associated with
each user.
• AspNetRoleClaims. For claims-based authentication of roles, this table will hold all the claims IDs
and values for a given role.
• AspNetUserTokens. A table that is used to store user tokens that were authenticated by using an
external OAuth token provider.
Authenticating users
Any security system involves managing users,
groups, and their credentials and access rights to
different resources. In this topic, you will examine
how authentication and authorization are
achieved in ASP.NET Core and what services can
be used in each scenario.
Register API
The register endpoint allows users to join your website by providing their username and password or
other authentication information such as social network credentials or OAuth tokens.
Here is the register code from the Register.cshtml.cs class that was auto-generated:
public RegisterModel(
    UserManager<IdentityUser> userManager,
    SignInManager<IdentityUser> signInManager,
    ILogger<RegisterModel> logger,
    IEmailSender emailSender)
{
    _userManager = userManager;
    _signInManager = signInManager;
    _logger = logger;
    _emailSender = emailSender;
}
As with any model class that is part of the identity framework, the class holds several services, such as
UserManager, SignInManager, ILogger, and IEmailSender. All these services are injected by using the
built-in Dependency Injection (DI) mechanism of .NET Core apps.
The registration code inside the class handles the request after the user has filled in the register form and
sent their credentials (email address and password):
Registration code
public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
    returnUrl = returnUrl ?? Url.Content("~/");
    if (ModelState.IsValid)
    {
        var user = new IdentityUser { UserName = Input.Email, Email = Input.Email };
        var result = await _userManager.CreateAsync(user, Input.Password);
        if (result.Succeeded)
        {
            _logger.LogInformation("User created a new account with password.");
            // The template signs the user in immediately after registration.
            await _signInManager.SignInAsync(user, isPersistent: false);
            return LocalRedirect(returnUrl);
        }
        foreach (var error in result.Errors)
        {
            ModelState.AddModelError(string.Empty, error.Description);
        }
    }

    // If we got this far, something failed; redisplay the form.
    return Page();
}
Note: The user’s password is saved by using a hashing mechanism so that it is kept
securely in the database.
Note: The code already contains methods for an email confirmation process that uses
the EmailSender service, although this is turned off by default.
Note: After successful registration, the user is logged on to the system by default.
Login API
The Login API allows users to provide their credentials (username and password) and be authenticated by
the identity middleware. The identity middleware runs a query against the users table in the database to
verify whether the given credentials match the credentials stored in the database. The password the user
supplies during this process is hashed with the same algorithm it was hashed with when the user
registered in the system. This way, by applying a one-way hash function, only the password hashes are
compared, and no clear-text user password is saved to the database.
Sign-in code
public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
    returnUrl = returnUrl ?? Url.Content("~/");
    if (ModelState.IsValid)
    {
        // Setting lockoutOnFailure to true makes repeated password
        // failures trigger an account lockout.
        var result = await _signInManager.PasswordSignInAsync(Input.Email, Input.Password,
            Input.RememberMe, lockoutOnFailure: true);
        if (result.Succeeded)
        {
            _logger.LogInformation("User logged in.");
            return LocalRedirect(returnUrl);
        }
        // ... dropped for brevity
    }
    return Page();
}
The built-in SignInManager utility class handles the password hashing, checks the stored credentials for
that user against the database, and returns a result indicating whether the user is authenticated.
Claims-based authorization
You have learned how to add an authentication
layer to your ASP.NET Core app. You have
provided your app with capabilities to register
users, logon, logout, and manage the user
information in a database.
To enable claims, you need to register and build the claim policy. This is done as part of the
ConfigureServices code in the Startup.cs file:
Enabling claims
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddAuthorization(options =>
    {
        options.AddPolicy("ManagerOnly", policy => policy.RequireClaim("IsManager"));
    });
}
This will register a new policy called ManagerOnly, which requires the IsManager claim to exist in the
identity that accesses the protected resource. We will look at a simple controller, called ValuesController
in a typical web application or web API application.
Controller definition
[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}
To restrict the web API to users that satisfy the policy, add the [Authorize] attribute to the Get
method:
Authorizing by claim
[Authorize(Policy = "ManagerOnly")]
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}
In this simple type of claims policy, only the presence of the IsManager claim is enforced, regardless of
its value. Users without this claim type will not be able to access the controller endpoint.
The [Authorize] attribute can be placed at the level of the controller itself in the following manner:
[Authorize(Policy = "ManagerOnly")]
[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}
If anonymous access is required for a resource while other resources need to have authorization, you can
use the [AllowAnonymous] attribute in the following manner:
[Authorize(Policy = "ManagerOnly")]
[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    [AllowAnonymous]
    [HttpGet("{id}")]
    public IEnumerable<string> GetById(int id)
    {
        return new string[] { "value3" };
    }
}
In this example, the controller enforces the ManagerOnly policy, so that only a user with the IsManager
claim can access the APIs, but the GetById endpoint grants anonymous access.
To use claims with values, you need to register the claim with its value or value list:
services.AddAuthorization(options =>
{
    options.AddPolicy("HrDepartment", policy =>
        policy.RequireClaim("EmployeeId", "100", "101", "102"));
});
In this example, a policy called HrDepartment registers a claim with the key EmployeeId and possible
values of 100, 101, or 102. Therefore, only employees with one of those claim values will be able to use
the protected resource.
Using policy
[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    [Authorize(Policy = "HrDepartment")]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}
In this example, only the employees with the corresponding IDs (100, 101, or 102) as their claim value will
be authorized for the Get API.
The following providers are supported by the ASP.NET Core Identity system:
• Microsoft
• Facebook
• Google
• Twitter
When configuring a social provider logon, in most cases, creating an app at the provider portal is one of
the preliminary steps. For example, for Twitter, you should create an app at https://apps.twitter.com/.
For Microsoft, you should create an app at https://apps.dev.microsoft.com.
After creating the app at the provider’s portal, you will get app-dedicated API tokens and/or a
combination of an application ID and a client secret, which will be used later as credentials for the
authentication service.
Recall that authentication services and configurations are configured in the ConfigureServices method in
the Startup.cs file.
Your application needs to supply the corresponding tokens received when registering the app at the
social provider portal (Microsoft, in this case). In the configuration API, the ClientId field is mapped to the
Microsoft application ID and the ClientSecret field is mapped to the Microsoft password.
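In ASP.NET Core 2.1, the registration looks like the following sketch; the configuration keys under
Authentication:Microsoft are an assumption and depend on where you stored your values:
Configuring the Microsoft account provider
services.AddAuthentication().AddMicrosoftAccount(microsoftOptions =>
{
    // The application ID and password generated at the Microsoft application portal.
    microsoftOptions.ClientId = Configuration["Authentication:Microsoft:ApplicationId"];
    microsoftOptions.ClientSecret = Configuration["Authentication:Microsoft:Password"];
});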
After you have plugged in the corresponding configuration options and run your application, every time a
user logs on to the app, the user will be redirected to Microsoft for authentication. After a successful logon,
the user will be redirected back to your app.
Configuring a social network provider is not a part of this course. For further information,
follow the instructions at:
https://aka.ms/moc-20487D-m9-pg1
ASP.NET Core Identity allows for easy integration with email and SMS confirmation services, although it
will not actually send the email or text message by itself. It requires you to provide a service that does the
actual sending, or to use a third party. To enable the email confirmation capability in your app, perform
the following steps:
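First, require a confirmed email address before users can sign in. A minimal sketch, assuming the default
identity registration (the user and context types depend on your project):
Requiring a confirmed email address
services.AddDefaultIdentity<IdentityUser>(config =>
{
    // Users must confirm their email address before they can sign in.
    config.SignIn.RequireConfirmedEmail = true;
})
.AddEntityFrameworkStores<ApplicationDbContext>();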
The next step is to configure an email service. An email service is not part of the identity system, and you
need to configure it either by writing an SMTP client mechanism or by using a third-party provider such as
SendGrid. If you use a third-party provider, you will need to create an account and configure access keys.
It is possible to use the built-in System.Net.Mail to send emails. However, it requires more effort and
security measures.
The next step is to write an email sender class that implements the IEmailSender interface and contains
the logic for sending an email.
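A minimal sketch of such a class follows; the sending logic is left as a stub to be replaced with your SMTP
or provider-specific code:
Implementing the IEmailSender interface
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity.UI.Services;

public class EmailSender : IEmailSender
{
    public Task SendEmailAsync(string email, string subject, string htmlMessage)
    {
        // Plug in an SMTP client or a third-party provider such as SendGrid here.
        return Task.CompletedTask;
    }
}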
Now you can plug in the email sender class so that the identity middleware will use it. Add the following
code to Startup.cs:
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

// requires
// using Microsoft.AspNetCore.Identity.UI.Services;
// using WebPWrecover.Services;
services.AddSingleton<IEmailSender, EmailSender>();
services.Configure<AuthMessageSenderOptions>(Configuration);
Finally, prevent users from auto-logon after registration. In the previous topics, you learned that after
registration, SignInManager is used to log on users. You need to prevent auto-logon so that only users
who have confirmed their email addresses are allowed to log on.
For more information on how to enable email confirmation, follow the link at:
https://aka.ms/moc-20487D-m9-pg2
• Two-factor authentication with QR code generation. Today, many applications use two-factor
authentication, which increases the level of security by getting the users to identify themselves with at
least two sets of credentials. For example, a user-password combination with an email address or an
SMS confirmation. Another method involves generating a QR code.
• Combine local and social accounts. ASP.NET Core Identity allows users to log on with their social
account. If the social provider service is not available, users will be allowed to log on with their local
account.
• Role-based authentication. In role-based authorization, a role, such as Admin is assigned to a user or
to a group. This role can later be used for the authorization process. Similar to claims-based
authorization, the role checks are made declarative by using attributes such as [Authorize(Roles =
"Admin,DBA")].
• Using different store providers. The ASP.NET Core Identity system is not limited to SQL Server. It is a
pluggable system that allows developers to use any storage provider that implements the store
interfaces the identity framework needs (such as IUserStore).
Objectives
After you complete this lab, you will be able to:
• Test an ASP.NET Core service with the authentication and authorization process.
Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD09_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD09_LAK.md.
Lesson 3
Securing Services with Azure AD
Azure AD is Microsoft’s multi-tenant, cloud-based directory and identity management service. Azure AD
combines core directory services, advanced identity governance, and application access management.
Azure AD also offers a rich, standards-based platform that enables developers to deliver access control to
their applications, based on centralized policy and rules.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe the basic authentication and authorization concepts and protocols required for working with
Azure AD.
• Explain how to manage Azure AD.
• Describe the integration of .NET Core server applications with Azure AD.
• Describe Azure AD B2C.
• Describe the integration of ASP.NET Core Web API applications with Azure AD B2C.
In the traditional client-server authentication model, sharing the resource owner's credentials with third-
party applications creates the following problems:
• Third-party applications are required to store the resource owner's credentials for future use, typically
a password in clear-text.
• Servers are required to support password authentication, despite the security weaknesses inherent in
passwords.
• Third-party applications gain overly broad access to the resource owner's protected resources, leaving
resource owners without any ability to restrict duration or access to a limited subset of resources.
• Resource owners cannot revoke access to an individual third party without revoking access to all third
parties and must do so by changing the third party's password.
• Compromise of any third-party application results in compromise of the user's password and all the
data protected by that password.
The solution for these problems and limitations is to force a separation between the client (web browser)
and the resource owner (user). This is done by separating the roles of the resource server and the
authorization server. The resource owner uses their credentials to authenticate against the authorization
server, which in turn provides an access token that is then used instead of the original resource owner’s
credentials. The access token represents an authorization issued to the client.
It is possible to put the solution described above into the following abstract flow:
1. The client authenticates with the authorization server and is issued an authorization grant.
2. Using the received authorization grant, the client proceeds to request an access token from an
authorization server.
3. Given that the authorization grant received from the client is valid, the authorization server generates
an access token and returns it to the client.
4. With the access token at hand, the client then sends the access token to the resource server to access
a protected resource.
5. The resource server validates the access token and uses the information it carries about the user to
decide whether to authorize access to the requested protected resource. If the resource server decides
to authorize access, the protected resource is served back to the client.
The flow presented above is an abstract flow. It is abstract in the sense that the authorization grant isn’t
defined. OAuth 2.0 has various flows, which are based on the abstract flow. Each flow brings its own
implementation for the authorization grant. The most important flows that you will learn about in this
course are the Authorization Code grant flow and the Implicit grant flow. To learn about other flows that
are specified in the OAuth 2.0 specification, refer to the OAuth 2.0 website. To learn more about OAuth
2.0, refer to the documentation at the following link:
OAuth 2.0
http://go.microsoft.com/fwlink/p/?linkid=214783
The Authorization Code flow is based on the basic OAuth flow and works as follows:
1. As mentioned in the abstract flow, the starting point is after the user is already authenticated. The
client redirects the resource owner’s user-agent to the authorization endpoint (this typically exists on
the authorization server). The client includes the following parameters in the redirection URL:
a. response_type. The value here is always code when using the Authorization Code flow.
b. client_id. The identifier issued to the client application when it registered with the
authorization server.
c. scope. The scope of the access request is a list of resources the client will be allowed to access. An
example is using a scope parameter to give the client access to the user’s profile.
d. redirect_uri. After the resource owner has either approved or denied the authorization request,
this is the URI to which the authorization server will redirect the resource owner’s user-agent. This
is usually a URI on the client application.
e. state. This is data shared between the authorization request and the callback invoked by the
supplied redirection URI. A good example would be to supply an application redirection URL so
that users can return to the same page they tried to access before being redirected to the
authorization server.
GET /authorize?response_type=code&client_id=s6BhdRkqt3&state=xyz
&redirect_uri=https://client.example.com/cb
2. On receiving the authorization request, the authorization server prompts the resource owner to
approve or deny the access request.
3. If the resource owner approves the request, the authorization server redirects the resource owner’s
user-agent to the URL specified in the redirect_uri parameter that was supplied as part of the
authorization request. As part of the redirection, the authorization server includes the authorization
code in the URL and any state provided in the authorization request.
4. Using the authorization code, the client requests an access token from the token endpoint (usually
another endpoint on the authorization server), the request consists of the following parameters:
a. grant_type. The value is always authorization_code.
b. code. The authorization code received in step 3.
The client includes the access token in the HTTP authorization header for every request performed against
the resource server. The resource server validates the access token and if valid, it returns the protected
resource.
Note: The Authorization Code flow is good only if the client doesn’t expose the
authorization process. This is why in browser-based client applications, the Authorization Code
flow doesn’t bring enough benefits and only complicates the authorization process.
o scope. A scope of the access request. This is a list of resources the client will be allowed to access.
For example, using a scope parameter to give the client access to the user’s profile.
o redirect_uri. After the resource owner has either approved or denied the authorization request,
this is the URI to which the authorization server will redirect the resource owner’s user-agent. This
is usually a URI on the client application.
o state. This is data shared between the authorization request and the callback invoked by the
supplied redirection URI. A good example would be to supply an application redirection URI so
that users can return to the same page they tried to access before being redirected to the
authorization server.
2. Example implicit grant request URL:
GET /authorize?response_type=token&client_id=s6BhdRkqt3&state=xyz&redirect_uri=https://client.example.com/cb
3. Upon receiving the authorization request, the authorization server prompts the resource owner to
approve or deny the access request.
4. If the resource owner approves the request, the authorization server redirects the resource owner’s
user-agent to the URL specified in the redirect_uri parameter that was supplied as part of the
authorization request. As part of the redirection, the authorization server includes the access token in
the URL and any state provided in the authorization request.
5. The client includes the access token in the HTTP authorization header for every request performed
against the resource server. The resource server validates the access token, and if valid, it returns the
protected resource.
Note: In the next lesson, “Azure Active Directory B2C,” you will learn about the OAuth 2.0
Implicit flow and its OpenID Connect version.
1. The client prepares an authentication request, which is very similar to the OAuth 2.0 authorization
request.
2. The client sends the request to the authorization server.
3. The authorization server authenticates the end user.
4. The authorization server obtains the end user’s consent.
5. The authorization server sends the user back to the client with an authorization code.
6. The client sends a request containing the authorization code to the token endpoint.
7. The token endpoint responds with an ID token and an access token.
8. The client validates the ID token and uses it as a source of the user’s information.
9. From here, the flow continues in a similar fashion to the OAuth 2.0 Authorization Code flow. The
client may send the access token to the resource server to gain access to a protected resource.
Client Libraries
Several client libraries implement OpenID Connect; the following are the most relevant to this course:
• ASP.NET Core Authentication middleware. This is the middleware for .NET Core-based server
applications that need to use an external identity provider.
• OWIN OpenID Connect middleware. This is the middleware for .NET Framework-based server
applications that need to use an external identity provider.
Interfacing with so many identity providers is not an easy task. Each provider can use a different protocol
and expose different claims. For this task, it is best to use an existing infrastructure (if available) rather
than trying to implement such an abstraction yourself. Azure offers exactly such an infrastructure, Azure
AD, and it is the main topic of this module and this lesson.
Azure AD is offered in several related services:
• Azure AD B2C service. It covers consumer identity use cases. It is intended for customer-facing
authentication and authorization scenarios.
• Azure AD B2B collaboration. It covers B2B scenarios, such as providing access to partners into
organizational assets.
This course covers the core Azure AD service and Azure AD B2C. Azure AD B2B is intentionally left out, but
you can read more about it by going to the following page:
https://aka.ms/moc-20487c-m11-pg4
As mentioned above, Azure AD is an identity provider and it supports adding and removing users. It also
exposes authentication- and authorization-related endpoints and exposes user data as tokens that contain
claims. Azure AD is also a directory service and includes the concepts of groups, memberships, role
management, and other directory features.
Azure AD exposes identities through different authentication and authorization protocols. The protocols
that this course focuses on are OAuth 2.0 and OpenID Connect 1.0, with identity data transferred over
JSON Web Tokens (JWTs).
Note: Aside from OAuth 2.0 and OpenID Connect 1.0, Azure AD also supports WS-
Federation and identity data transfer over SAML tokens, both of which are not covered in this
course, but you can read about them by going to the following page:
https://aka.ms/moc-20487c-m11-pg5
Azure AD, as mentioned above, is both an identity provider and a directory service, and so it manages
users and provides fine-grained access to different Azure resources. Every Azure AD tenant is linked to a
subscription and using that link, Azure AD can provide access management services for resources
associated with that subscription.
Managing users
With Azure AD, you can easily create and remove users, edit users’ details, and reset passwords. You can
also invite users by email. This means that you have complete control of user management. Users cannot
just sign up to your directory and have access to resources.
From the description above, you can infer that the typical use case for Azure AD is to manage
organizational users. For example, a company would have an Azure AD tenant and the R&D department
would have users associated with that Azure AD tenant.
To view and manage users:
1. Open a browser, go to the Azure portal, and then, in the navigation pane, click Azure Active
Directory.
2. On the Azure Active Directory blade, click Users. This will lead you to the users management dialog.
3. On the Users blade, you can see a list of existing users (this will probably show only one user,
yourself). You can also create a new user or invite another user.
Azure AD supports several ways of granting users access to resources:
• Direct assignment. Users are assigned directly to an Azure resource (for example, to an app service).
• Group membership. The group is assigned to a resource and users that are members of this group
have access to the resource.
• Rule based. This is a special case of group membership. For example, all users where the department
matches R&D.
• External authority. Access to resources is controlled by an external authority. For example, access is
controlled by using data from an on-premises active directory instance.
Directory access is different from application-level access (application authorization). Directory access
means that while navigating through the portal, the directory access services will dictate which user can
access what resources and to what degree that user can manage resources. On the other hand,
application-level access controls the access to application-level resources, such as specific pages or
endpoints within the application.
This course covers group memberships because groups can be used as claims in ASP.NET applications,
and you can even use groups instead of roles to control access to specific application-level resources.
To learn more about other ways to control access by using Azure AD, go to the following page:
Azure AD Documentation – How to manage groups and members
https://aka.ms/moc-20487D-m9-pg5
Groups can be added through the Azure portal and after adding a group, users can be assigned as
members to the group. A user can be a member of multiple groups. Groups can be members of other
groups.
To manage group and memberships:
1. Open a browser, navigate to the Azure portal, and then, in the navigation pane, click Azure Active
Directory.
2. On the Azure Active Directory blade, click Groups.
3. On the Groups blade, you can create a new group, view the details of existing groups, and manage
group memberships.
Demonstration Steps
You will find the steps in the “Demonstration: Creating an Azure Active Directory and Users” section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD09_DEMO.md
Azure AD Applications
Azure AD has a concept of applications. An Azure
AD application is a contract between an
application developed by a company and Azure
AD. Azure AD can be integrated with any kind of
application by using Azure AD applications.
• Application Type. Select Native for client applications that are installed locally on a device. This
setting is used for OAuth public native clients. Select Web app / API for client
applications and resources or API applications that are installed on a secure server. This setting is used
for OAuth confidential web clients and public user-agent-based clients. The same application can also
expose both a client and resource or API.
• Sign-on URL. For web apps and API apps, provide the base URL of your app. For
example, http://localhost:31544 might be the URL for a web app running on your local machine.
Users would use this URL to sign in to a web client application.
• Redirect URI. For native applications, provide the URI used by Azure AD to return token responses.
Enter a value specific to your application; for example, http://MyFirstAADApp.
Apart from the attributes defined above, an Azure AD application also has a set of permissions; for
example, permissions to read directory data.
To connect any application to Azure AD, you need an Azure AD application. The application has to be
configured correctly both on Azure AD and on the ASP.NET application.
Configuration setup
Depending on the type of application you are building, you may or may not need to include
authentication in the application. If you are building an MVC application, you will probably need to
include a way for the user to authenticate. If you are building a set of REST APIs, the application will only
expect a valid token, meaning that the authentication part isn’t the responsibility of the application.
For simplicity, let’s assume that the application itself does not perform interactive authentication (the
REST API case). In that case, ASP.NET Core provides an authentication scheme called AzureADBearer. This
scheme is different from the old-fashioned user-password authentication. It requires a JWT to be
provided in the authorization HTTP header. When
ASP.NET Core receives a JWT, it validates the token against the identity provider (Azure AD) and if valid,
the user details are read from the JWT into a new ClaimsIdentity instance.
Configuration Entries
To use the AzureADBearer authentication scheme, ASP.NET Core needs to have the following parameters
defined:
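A typical appsettings.json section is sketched below; all values are placeholders that come from your
Azure AD application registration:
Azure AD configuration entries
"AzureAd": {
  "Instance": "https://login.microsoftonline.com/",
  "Domain": "<your-tenant>.onmicrosoft.com",
  "TenantId": "<tenant-id>",
  "ClientId": "<application-id>"
}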
// Note: this listing is heavily abbreviated; omitted code is marked with // ...
namespace WebApplication13
{
    public class AuthPropertiesTokenCache : TokenCache
    {
        private const string TokenCacheKey = ".TokenCache";
        // ...
        BeforeAccessNotificationWithProperties(args);
    }
    // ...
    {
        AfterAccessNotificationWithProperties(args);
        context.HandleCodeRedemption(result.AccessToken, result.IdToken);
    },
    OnAuthenticationFailed = c =>
    {
        c.HandleResponse();
        c.Response.StatusCode = 500;
        c.Response.ContentType = "text/plain";
        return c.Response.WriteAsync(c.Exception.ToString());
    }
};
});
}
Add the authentication middleware and implement the sign-in and sign-out handlers.
Authentication implementation
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseHsts();
    }

    app.UseHttpsRedirection();
    // UseAuthentication must run before MVC so that requests are authenticated first.
    app.UseAuthentication();
    app.UseMvc();

    app.Run(async context =>
    {
        if (context.Request.Path.Equals("/signin"))
        {
            if (context.User.Identities.Any(identity => identity.IsAuthenticated))
            {
                // User has already signed in
                context.Response.Redirect("/");
                return;
            }
            // ... (listing abbreviated; the following fragment renders the
            //      signed-in user's claims and tokens into the response)
            await response.WriteAsync("<h2>Claims:</h2>");
            await WriteTableHeader(response, new string[] { "Claim Type", "Value" },
                context.User.Claims.Select(c => new string[] { c.Type, c.Value }));
            await response.WriteAsync("<h2>Tokens:</h2>");
            try
            {
                // Use ADAL to get the right token
                var authContext = new AuthenticationContext(Authority,
                    AuthPropertiesTokenCache.ForApiCalls(context,
                        CookieAuthenticationDefaults.AuthenticationScheme));
                var credential = new ClientCredential(ClientId, ClientSecret);
                string userObjectID = context.User.FindFirst(
                    "http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
                var result = await authContext.AcquireTokenSilentAsync(Resource,
                    credential, new UserIdentifier(userObjectID, UserIdentifierType.UniqueId));
                await response.WriteAsync(
                    $"<h3>access_token</h3><code>{HtmlEncode(result.AccessToken)}</code><br>");
            }
            catch (Exception ex)
            {
                await response.WriteAsync($"AcquireToken error: {ex.Message}");
            }
        }
    });
}
// Fragment of a helper method that wraps the rendered content in an HTML page:
response.ContentType = "text/html";
await response.WriteAsync($"<html><head>{bootstrap}</head><body><div class=\"container\">");
await writeContent(response);
await response.WriteAsync("</div></body></html>");
}
Demonstration Steps
You will find the steps in the “Demonstration: Securing an ASP.NET Core application using OpenID
Connect and AAD” section on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD09_DEMO.md.
Azure AD B2C supports the following account types:
• Enterprise accounts that use open standard protocols such as OpenID Connect or SAML.
• Local accounts, which include an email address and password or username and password.
Integrating enterprise accounts is supported by an advanced feature called identity experience framework
and is not covered in this module. You can learn more about it on the following page:
To create a new Azure AD B2C tenant:
1. In the Azure portal, create a new Azure Active Directory B2C resource.
2. In the Create new B2C Tenant or Link to existing Tenant blade, choose Create a new Azure AD
B2C Tenant.
3. Enter the required details and create the tenant. It should take about a minute.
4. Eventually, you will be presented with an information box. Click the link to go to the new Azure AD
B2C tenant.
As stated earlier in the module, Microsoft has two related entities—subscriptions and tenants. A tenant
without an active subscription is inactive. When a new Azure AD B2C tenant is created, it is not linked to
any subscription and needs to be manually linked to a subscription.
To link an Azure AD B2C tenant to a subscription:
1. Make sure you are on the primary Azure AD tenant. You can verify this by opening the account menu
on the top right of the portal.
2. Create a new Azure Active Directory B2C resource.
3. In the Create new B2C Tenant or Link to existing Tenant blade, choose Link an existing Azure
AD B2C Tenant to my Azure Subscription.
4. In the Azure AD B2C Resource, select Azure AD B2C Tenant, the subscription you want to link to,
fill in the resource group, and then create the link.
After linking an Azure AD B2C tenant to an active subscription, the tenant should become active as well.
Azure AD B2C has many capabilities, out of which this module covers the following:
• Managing users
Identity Providers
While Azure AD B2C is an identity provider, it is
not the only provider. In the modern world of application development, it is extremely common to visit a
web application that allows you to sign in using your Google, Facebook, or Microsoft accounts. The
reason it is so appealing is that it doesn’t require the user to remember yet another password, all it
requires is one click and the user is logged in to the system.
For the application developer, setting up an identity provider is a relatively short process. It starts with
setting up an application on one of the identity providers and then connecting those providers to Azure
AD B2C by adding them on the identity providers blade.
The identity providers blade lets you add different providers, such as Google, Facebook, or Microsoft.
For the sake of simplicity, this module will only use Azure AD B2C as an identity provider, but you can
read more about social providers in the Azure AD B2C documentation.
User Attributes
Azure AD B2C ships with a readymade set of attributes that can be used as part of the user’s profile. These
attributes are available to applications as claims and the user can be asked to fill them during sign-up.
• Given name
• City
• Country/Region
• Display name
• User is new
Note: As of April 2018, Azure AD B2C ships with 13 built-in user attributes.
In addition to the built-in claims, it is possible to define additional attributes. On the Azure AD B2C
blade, go to the User Attributes blade. You should see a list of existing attributes.
To add a new attribute:
1. Click Add.
2. Enter a name for the new attribute. The name can include only alphanumeric characters and
underscore. The name cannot start with a number.
3. Choose a data type and enter a description.
4. Click Create.
Note: As of April 2018, Azure AD B2C supports three data types of custom attributes:
• String
• Boolean
• Int
Add the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package.
To the application settings, add the tenant name and the policy name that you created in Azure.
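The settings section might look like the following sketch; the tenant, policy, and application values are
placeholders from your Azure AD B2C tenant:
Azure AD B2C configuration entries
"AzureAdB2C": {
  "Instance": "https://login.microsoftonline.com/tfp/",
  "ClientId": "<application-id>",
  "Domain": "<your-tenant>.onmicrosoft.com",
  "SignUpSignInPolicyId": "B2C_1_SiUpIn"
}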
For more information about using Azure AD B2C in the ASP.NET Core application, go to the
following URL:
https://aka.ms/moc-20487D-m9-pg7
Demonstration Steps
You will find the steps in the “Demonstration: Using AAD B2C with ASP.NET Core” section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD09_DEMO.md.
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD09_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD09_LAK.md.
You learned about the latest recommended industry-grade authorization and authentication protocols—
OAuth 2.0 and OpenID Connect. You know your way around Azure AD and can integrate organizational
ASP.NET applications with Azure AD.
Finally, you learned about Azure AD B2C and how to provide a good authentication experience for users.
You can integrate UWP applications with both Azure AD B2C and a secured ASP.NET Web API application.
Best Practices
• Use OpenID Connect to secure your applications, both client and server.
• Use well-known identity providers such as Azure AD, Azure AD B2C, Google, and Amazon.
• Use the OAuth 2.0 authorization code grant only for trusted applications and use the implicit flow for
any untrusted applications such as user-facing applications.
Review Question
Question: What are the advantages of using claims-based identity?
Tools
• Microsoft Visual Studio 2017
• Microsoft Azure portal
Module 10
Scaling Services
Contents:
Module Overview
Module Overview
Services that are successful in providing business value are likely to experience growth in the number of
users and the amount of data that they need to handle. Developers should know how to make sure that
their services can handle the increasing workload while still maintaining a high level of performance and
good user experience. You will learn about the need for scalable services and how to handle increasing
workloads by using load balancing and distributed caching.
You will learn about scaling services in cloud deployments, along with the challenges that such services
face while they are growing.
Note: The Microsoft Azure portal user interface (UI) and Azure dialog boxes in Microsoft
Visual Studio 2017 are updated frequently when new Azure components and SDKs for Microsoft
.NET are released. Therefore, it is possible that some differences will exist between screenshots
and steps shown in this module, and the actual UI you encounter in the Azure portal and Visual
Studio.
Objectives
After completing this module, you will be able to:
• Explain Azure Load Balancer, Azure Application Gateway, and Azure Traffic Manager.
Lesson 1
Introduction to Scalability
Scalability is a critical aspect of any service-oriented software. It has a direct impact on how users view the
reliability and trustworthiness of a service and therefore has a bearing on the business.
Load balancing is a technique that enables applications to scale and be more resilient to failure. For large-
scale, distributed applications, this is an extremely important issue.
In this lesson, you will be introduced to the two approaches for scaling large applications and the
components they require. You will also learn about the different ways in which you can perform load
balancing and how to load-balance your Azure application.
Lesson Objectives
After completing this lesson, you will be able to:
• Describe the reasons that make scalability important.
A scalable system can handle such peaks and spikes in demand without any degradation in the service
quality experienced by customers. This is very important from a business perspective because it has a
direct impact on how customers perceive the reliability and trustworthiness of the service.
Scaling Approaches
There are two different approaches for scaling
services:
Scaling Out
To scale out, you add additional nodes to an
existing system. With the increased computing
power and decreased cost of ‘commodity’ hardware, which is hardware that is easily available to
consumers, adding more processing and storage capacity to a distributed application is a very simple
undertaking. Modern distributed applications often run on large clusters of low-cost computers that are
interconnected into a single cluster. Such applications need to be aware of the fact that they run in a
clustered environment.
Scaling Up
To scale up, you add additional resources (processing or storage) to a single node of the system. This is
often the easiest option to apply but has inherent limitations, such as the maximum memory capacity or
the number of network cards that can be installed on a single computer. At this point, there is no choice but
to replace the node with a better and more capable node. Scaling up might also require the application to
scale up along with the hardware—for instance, the application must be able to take advantage of
multiple cores in a single CPU.
• Shared configuration. You can use a shared configuration to store and administer configuration
settings in a single location. This location can then be used to automatically configure server software
on multiple nodes, such as Microsoft Internet Information Services (IIS).
• Centralized SSL Certificate Support. Centralized SSL Certificate Support, a new feature in IIS 8, allows
you to store SSL certificates in a single location. Multiple IIS-hosting nodes can then use this location
for gaining access to the certificates. This helps the administration secure distributed applications
much more easily than was previously possible.
The Windows Server family of operating systems supports load balancing by using Network Load Balancing
(NLB). This is done by combining two or more computers running the same server software, such as IIS, into a
single cluster, which can then be accessed by using its own IP address, but still maintains the IP addresses
of the individual machines. The amount of traffic that each individual computer can handle is known as
load weight and you can configure the load weight for each computer. You can dynamically add or
remove machines to and from the cluster.
DNS Round Robin is an additional method of load balancing that does not require dedicated software or
hardware. When clients make calls to some domains such as www.blueyonder.com, the domain name is
resolved into a numerical IP address by using a Domain Name System (DNS) server. When using DNS
Round Robin, the DNS server resolves the domain name into a different IP address for each individual
request. The major disadvantage of this technique is that it makes the clients aware of the existence of
multiple machines.
You can also implement load balancing by using Microsoft Web Farm Framework for IIS. The Microsoft
Web Farm Framework provides load balancing, scaling, management, and provisioning solutions for IIS-
based web farms. The Microsoft Web Farm Framework also supports application-related solutions, such as
connection stickiness, and central output caching.
For additional information about the Microsoft Web Farm Framework, refer to the IIS documentation.
• Hash-based distribution mode. In this mode, Load Balancer computes a hash based on the 5-tuple
that includes the source IP, source port, destination IP, destination port, and protocol type (TCP or
UDP). Packets that have the same 5-tuple are routed to the same endpoint. This guarantees that
packets belonging to the same TCP session will be handled by the same endpoint. However, if a client
creates multiple TCP sessions, the source port may change and Load Balancer may direct the traffic to
a different endpoint. This can happen when the client issues multiple HTTP requests to the same
service.
• Source IP affinity distribution mode. In this mode, a 2-tuple that includes the source IP and
destination IP, or a 3-tuple that includes the source IP, destination IP, and protocol type, is used to
map traffic to the available endpoints. When using source IP affinity, packets from the same client IP
will always go to the same endpoint if they are directed to the same destination IP.
Load Balancer has additional useful features for traffic management. It can monitor the health of your
services by probing their endpoints with HTTP or TCP requests, can forward or block specific ports or
remap ports exposed externally to different ports, and more.
Application Gateway is a Layer 7 (application) load balancer. In addition to pure HTTP load balancing,
Application Gateway supports SSL termination, URL-based routing, web application firewall, and more.
Application Gateway does not support arbitrary protocols. It works with HTTP, HTTPS, and WebSockets
traffic only.
With Application Gateway, you can route traffic to endpoints by using the following strategies:
• Round-robin routing. In this mode, each request will be routed to another instance of your service,
chosen in a round-robin fashion (for example, with three instances: the first request to instance 1, the
second request to instance 2, the third request to instance 3, the fourth request back to instance 1,
and so on).
• URL-based routing. In this mode, you can inspect the URL path components to determine which
endpoint will receive the traffic.
You can control the load balancer’s affinity (stickiness) by using an HTTP cookie. The first response sent by
Application Gateway to a specific client can contain an HTTP cookie, which will be sent on subsequent
requests from the same client session. Application Gateway can then use this cookie to route subsequent
requests from the same session to the same service endpoint.
Application Gateway offers numerous additional features, including web application firewall (protection
against common attacks like cross-site scripting and SQL injection), SSL offloading, automatic health
monitoring, HTTP to HTTPS redirection, and more. Furthermore, you can use Application Gateway to
route traffic to non-Azure services. Any public Internet IP address can be an endpoint serviced by an
Application Gateway load balancer. As a result, you can mix on-premises service instances and Azure-
hosted service instances behind the same load balancer, achieving additional flexibility and robustness to
failure.
Note that in a load-balanced scenario, each request may arrive at a different instance. This means that any
common data, such as session information in an ASP.NET application, needs to be accessible to all
instances. You can use a database or a distributed cache for this purpose.
Another approach to load balancing is the use of message queues: either Azure Service Bus queues or
Azure Queue storage. In this scenario, you bring up multiple worker roles that read from a single queue.
Because each message is consumed by only one worker, the processing load is distributed across those
workers. For more details on queues, refer to module 7, "Microsoft Azure Service Bus" and module 9,
"Microsoft Azure Storage."
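As a sketch, the shared queue itself can be created with the Azure Storage PowerShell cmdlets; the
account name is a placeholder and $key is assumed to hold the storage account key. Each worker
instance then polls this queue, and every message is processed by exactly one worker.
Creating a storage queue shared by multiple workers
# Build a storage context and create the queue the workers will read from
$ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey $key
New-AzureStorageQueue -Name "work-items" -Context $ctx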
Demonstration Steps
You will find the steps in the “Demonstration: Scaling Out with Microsoft Azure Web Apps” section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_LAK.md.
Lesson 2
Automatic Scaling
Automatic scaling is a critical aspect of any service-oriented software. It has a direct impact both on the
operational work required of developers and on the user experience.
Lesson Objectives
After completing this lesson, you will be able to:
An autoscale rule is defined by the following settings:
• Metric name. The name of the metric that will be monitored and used for the rule’s criteria. Choose
metrics according to your application’s needs. Available options are CPU percentage, memory
percentage, disk queue length, HTTP queue length, data in, and data out.
• Time grain statistic. Used to reduce noise by aggregating the metric over each minute (the time
grain). Available options are average, minimum, maximum, and sum.
• Operator. Used to compare the aggregated metric value to the threshold. Available options are:
greater than, greater than or equal to, less than, less than or equal to, equal to, not equal to.
• Threshold. The numeric value that the metric is compared to. For percentage metrics, the threshold
is a value between 1 and 100.
• Duration. The amount of time, in minutes, over which the metric is aggregated by using the time
aggregation function.
• Operation. Defines how to increase or decrease the instance count. Available options are:
increase count by, increase percent by, increase count to, decrease count by, decrease percent by,
decrease count to.
• Cooldown. The amount of time to wait after a scale operation before scaling again. For example, if
the cooldown is 10 minutes and a scale operation just occurred, autoscale will not attempt to scale
again for 10 minutes. This allows the metrics to stabilize before scaling again. The sketch after this
list shows how these settings combine into a single rule.
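Here is a sketch of an autoscale rule created with the AzureRM PowerShell module that ties these
settings together; the metric resource ID is a placeholder. The rule would then be attached to a profile
created with New-AzureRmAutoscaleProfile and saved with Add-AzureRmAutoscaleSetting.
Creating an autoscale rule
# Scale out by one instance when average CPU over the last 10 minutes
# exceeds 70 percent, and then wait 10 minutes before scaling again
$rule = New-AzureRmAutoscaleRule `
    -MetricName "CpuPercentage" `
    -MetricResourceId "/subscriptions/<id>/resourceGroups/myGroup/providers/Microsoft.Web/serverFarms/myPlan" `
    -TimeGrain 00:01:00 -MetricStatistic Average -TimeWindow 00:10:00 `
    -Operator GreaterThan -Threshold 70 `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount `
    -ScaleActionValue "1" -ScaleActionCooldown 00:10:00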
• Offload resource-intensive tasks. To minimize the load on the servers that handle user requests, tasks
that consume a lot of CPU or I/O resources should be moved to background jobs when possible.
• Design for scale in. When instances are removed, the application must terminate gracefully. Here are
some things that need to be handled carefully:
o Consumers of a service should handle errors and apply a retry policy when a call fails.
o For long-running tasks, consider breaking up the work and using checkpoints.
o Use queues so that if an instance is removed in the middle of processing, the work can be rerun
on another instance.
Fill in the name of this setting and configure the instance limits to a minimum of 1 instance and a
maximum of 5 instances, and then save the settings.
Demonstration Steps
You will find the steps in the “Demonstration: Configuring Automatic Scaling for Azure Web Apps” section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.
Lesson 3
Application Gateway and Traffic Manager
Scalable applications consist of multiple compute nodes, which are often distributed across multiple
cloud regions or deployed in a hybrid environment that also includes on-premises nodes. Balancing
traffic to these compute nodes requires performing a set of repetitive tasks, such as determining node
health and removing unhealthy nodes from the pool of potential targets, routing traffic based on the
client’s location, and protecting backend nodes from various security attacks. Earlier in this course, we
discussed some of the benefits of having an API management layer (such as Azure API Management) in
front of your web service, or a general reverse proxy (such as IIS or NGINX). In this lesson, we will discuss
additional Azure services for performing global and local load balancing of your compute nodes.
In this lesson, we will discuss Application Gateway and Traffic Manager, the two Azure services for load
balancing scalable services. By using Traffic Manager, you can distribute your service across multiple
geographic regions and route traffic according to the user’s location. By using Application Gateway, you
can perform sophisticated load balancing of HTTP, WebSockets, and HTTP/2 traffic.
Lesson Objectives
After completing this lesson, you will be able to:
• Explain the capabilities of Application Gateway and the benefits of using it.
Application Gateway is a load balancer for web services and applications that operates at Layer 7 (the
application layer) of the OSI networking model. Rather than treating HTTP traffic as plain TCP packets at
the transport layer, Application Gateway inherently understands HTTP requests and can route traffic
based on the request URL. For example, requests to /videos can be serviced by a different pool of
machines than the rest of your application’s traffic. Application Gateway also supports the WebSockets
and HTTP/2 protocols, in addition to HTTP (over plaintext or TLS).
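Here is a sketch of such URL-based routing configured with the AzureRM PowerShell cmdlets, assuming
that a dedicated $videoPool, a $defaultPool, and an $httpSettings object already exist:
Routing /videos requests to a dedicated backend pool
# A path rule that sends /videos traffic to the dedicated pool
$videoRule = New-AzureRmApplicationGatewayPathRuleConfig -Name "videos" `
    -Paths "/videos/*" -BackendAddressPool $videoPool -BackendHttpSettings $httpSettings
# A path map that applies the rule and falls back to the default pool
$pathMap = New-AzureRmApplicationGatewayUrlPathMapConfig -Name "pathMap" `
    -PathRules $videoRule -DefaultBackendAddressPool $defaultPool `
    -DefaultBackendHttpSettings $httpSettings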
In addition to providing load balancing services, Application Gateway offers some additional features:
• SSL termination. You can offload the costly decryption and encryption work from your backend web
servers and perform SSL (TLS) processing on the gateway.
• Request redirection. You can redirect specific requests to other hosts or, very commonly, redirect all
HTTP (insecure) traffic to HTTPS (see the sketch after this list).
• Web application firewall (WAF). You can protect your backend web servers from common web
application attacks by having the gateway block them.
• Reliability. You can protect your service from failures and downtime by having the gateway
automatically detect unhealthy nodes and move traffic to the healthy ones.
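As a sketch of the redirection feature, the following AzureRM PowerShell command creates a
configuration that permanently redirects HTTP traffic to an existing HTTPS listener; the configuration
name is a placeholder and $httpsListener is assumed to exist.
Redirecting HTTP traffic to HTTPS
# Permanent (301) redirect that preserves the path and query string
$redirect = New-AzureRmApplicationGatewayRedirectConfiguration -Name "httpToHttps" `
    -RedirectType Permanent -TargetListener $httpsListener `
    -IncludePath $true -IncludeQueryString $true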
For more information about the types of attacks detected and mitigated by Application
Gateway’s optional Web Application Firewall, go to:
https://aka.ms/moc-20487D-m10-pg5
Application Gateway can route traffic to a number of nodes in a backend pool. Each gateway can handle
multiple pools and route traffic to nodes within the pools based on criteria that you specify. You can mix
and match numerous types of nodes, including:
• Azure virtual machines and virtual machine scale sets
• Web applications and services hosted in Azure App Service
• Any publicly reachable IP address or host name, including servers hosted on-premises or in other
clouds
This ability to mix and match nodes that are hosted in different environments helps you implement
various hybrid solutions with very high degrees of reliability. For example, you can have a backup node
running in a different region, or even a different cloud provider, which will be used in case of a disaster
that affects your primary nodes. Or, you could route most of your traffic to a simple service hosted in
Azure App Service, but route more computationally-expensive requests (such as video or image encoding)
to a dedicated pool of powerful virtual machines.
An application gateway deployment consists of several building blocks, including:
• Routing rules and URL path maps. Rules that specify how to route traffic to nodes.
• Probes. Health checks that automatically remove unhealthy nodes from rotation.
• HTTP listener. The component that accepts incoming requests on the gateway’s front-end IP address
and port and routes them to an address pool.
• Virtual network and subnet. A set of IP addresses to host the gateway instances.
In a nutshell, when an HTTP request arrives at the gateway’s front-end IP address and port (such as
52.178.32.29:8080), a listener picks up the request and evaluates routing rules and path maps to
determine which address pool should process the request. For example, there can be a rule specifying that
requests with URLs containing /videos should be routed to a special backend pool. Unhealthy nodes are
removed from the pool using probes, so the listener can send the request to one of the healthy nodes,
wait for a response, and then send the response back to the client.
You can use the Azure portal, Azure PowerShell, or the Azure Command-Line Interface (CLI) to create and
configure Application Gateway. However, keep in mind that the Azure portal provides support only for
the simpler scenarios. For example, to add a web service hosted in Azure App Service to an Application
Gateway backend pool, you will need to use Azure PowerShell or the Azure CLI.
For more information about Application Gateway, including reference documentation for the
Azure CLI and PowerShell, go to:
https://aka.ms/moc-20487D-m10-pg6
The following screenshot shows the configuration page for creating a new application gateway. Note that
creating a gateway with only one instance means you are not covered by the Azure Service Level
Agreement (SLA).
The following screenshot shows the end of the application gateway configuration process after a subnet
has been configured and a public IP address selected:
The following screenshot illustrates how to configure the backend pool for your application gateway by
adding either Azure Virtual Machines or IP addresses to the pool:
The following screenshot illustrates a fully-configured Application Gateway instance with a public IP
address, as shown on the Azure portal:
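When the backend pool contains web apps hosted in Azure App Service, the gateway must retrieve the
host name from the backend address and use it in its health probes. The listing below is a sketch of this
configuration with the AzureRM PowerShell cmdlets; the gateway, probe, and HTTP settings names are
placeholders.
Configuring HTTP settings and a health probe for an App Service backend
$gw = Get-AzureRmApplicationGateway -Name "myGateway" -ResourceGroupName "myGroup"
# Step 1: update the HTTP settings to pick the host name from the backend address
Set-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $gw `
    -Name "appGatewayBackendHttpSettings" -Port 80 -Protocol Http `
    -CookieBasedAffinity Disabled -PickHostNameFromBackendAddress
# Step 2: create a health probe that reuses the host name from the HTTP settings
Add-AzureRmApplicationGatewayProbeConfig -ApplicationGateway $gw `
    -Name "appServiceProbe" -Protocol Http -Path "/" `
    -Interval 30 -Timeout 30 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings
# Step 3: associate the probe with the gateway's HTTP settings, and then save
$probe = Get-AzureRmApplicationGatewayProbeConfig -ApplicationGateway $gw -Name "appServiceProbe"
Set-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $gw `
    -Name "appGatewayBackendHttpSettings" -Port 80 -Protocol Http `
    -CookieBasedAffinity Disabled -PickHostNameFromBackendAddress -Probe $probe
Set-AzureRmApplicationGateway -ApplicationGateway $gw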
There are three steps in the preceding listing. The first updates the HTTP settings for the gateway to
retrieve the hostname from the backend pool. The second creates a new health probe and configures it to
use the hostname settings. The third command associates the health probe with the gateway’s HTTP
settings.
For more information on using Application Gateway with web applications and services
hosted in Azure App Service, go to:
https://aka.ms/moc-20487D-m10-pg7
Demonstration Steps
You will find the steps in the “Demonstration: Using an Azure Web App Behind Azure Application
Gateway” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.
Traffic Manager supports several traffic-routing methods, including:
• Priority. Traffic Manager uses the primary endpoint if it is available. If the primary endpoint fails, it will
use the secondary (backup) endpoints. You can configure the health checks that Traffic Manager will
use to determine whether your service is healthy.
• Weighted. Traffic Manager will distribute traffic across all endpoints based on the weights you specify.
For example, you can route 20 percent of the traffic to one endpoint and 80 percent of the traffic to
another endpoint (see the sketch after this list).
• Geographic. Traffic Manager can direct traffic to specific endpoints based on the client’s geographic
location. For example, users from the Russian Federation can be directed to on-premises servers in
the Russian Federation (to comply with local regulations), while other users from Europe and Asia will
be directed to Azure-hosted servers.
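Here is a sketch of creating a weighted Traffic Manager profile with two endpoints by using the AzureRM
PowerShell module; all names and target addresses are placeholders.
Creating a weighted Traffic Manager profile
# A profile that routes traffic by weight and probes endpoints over HTTP
New-AzureRmTrafficManagerProfile -Name "myProfile" -ResourceGroupName "myGroup" `
    -TrafficRoutingMethod Weighted -RelativeDnsName "myservice-tm" -Ttl 30 `
    -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"
# 80 percent of the traffic goes to the first endpoint
New-AzureRmTrafficManagerEndpoint -Name "primary" -ProfileName "myProfile" `
    -ResourceGroupName "myGroup" -Type ExternalEndpoints `
    -Target "myservice-west.example.com" -EndpointStatus Enabled -Weight 80
# The remaining 20 percent goes to the second endpoint
New-AzureRmTrafficManagerEndpoint -Name "secondary" -ProfileName "myProfile" `
    -ResourceGroupName "myGroup" -Type ExternalEndpoints `
    -Target "myservice-east.example.com" -EndpointStatus Enabled -Weight 20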
The following screenshot illustrates the Azure portal dialog for creating a new Traffic Manager profile:
The following screenshot illustrates the dialog for adding a new endpoint to a Traffic Manager profile:
Note: To configure a web application or service running in Azure App Service to use Traffic
Manager, you need to use the Standard SKU. Otherwise, Traffic Manager will not route traffic to
your service.
Demonstration Steps
You will find the steps in the “Demonstration: Using Traffic Manager With an Azure Web App in Multiple
Regions” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.
When you combine the two services, you get the benefit of global, geography-aware routing by using
Traffic Manager, and you get the convenience of balancing and routing your application traffic by using
Application Gateway.
The key differences between Load Balancer, Application Gateway, and Traffic Manager are summarized
below. They support different protocols, operate at different layers, and allow different types of
endpoints in the backend pool:
• Load Balancer operates at the transport layer (Layer 4) and balances TCP and UDP traffic across
virtual machines and virtual machine scale sets inside a virtual network.
• Application Gateway operates at the application layer (Layer 7) and balances HTTP, HTTPS,
WebSockets, and HTTP/2 traffic across backend nodes, which can be any reachable IP address,
including on-premises servers.
• Traffic Manager operates at the DNS level, is protocol-agnostic, and directs clients to any publicly
accessible endpoint, whether hosted in Azure or elsewhere.
For more information on the various Azure load balancing services (Traffic Manager,
Application Gateway, and Load Balancer), and a sample architecture case study that exhibits
use cases for all three of them, go to: https://aka.ms/moc-20487D-m10-pg9
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 15 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_LAK.md.
Course Evaluation
Your evaluation of this course will help Microsoft
understand the quality of your learning
experience.