OFFICIAL MICROSOFT LEARNING PRODUCT

20487D
Developing Microsoft Azure™ and Web
Services

Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.

The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations or warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not
responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.

© 2019 Microsoft Corporation. All rights reserved.

Microsoft and the trademarks listed at http://www.microsoft.com/trademarks are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.

Product Number: 20487D


Part Number: C90-07633

Released: 01/2019
MICROSOFT LICENSE TERMS
MICROSOFT INSTRUCTOR-LED COURSEWARE

These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its
affiliates) and you. Please read them. They apply to your use of the content accompanying this agreement which
includes the media on which you received it, if any. These license terms also apply to Trainer Content and any
updates and supplements for the Licensed Content unless other terms accompany those items. If so, those terms
apply.

BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.

If you comply with these license terms, you have the rights below for each license you acquire.

1. DEFINITIONS.

a. “Authorized Learning Center” means a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, or such other entity as Microsoft may designate from time to time.

b. “Authorized Training Session” means the instructor-led training class using Microsoft Instructor-Led
Courseware conducted by a Trainer at or through an Authorized Learning Center.

c. “Classroom Device” means one (1) dedicated, secure computer that an Authorized Learning Center owns
or controls that is located at an Authorized Learning Center’s training facilities that meets or exceeds the
hardware level specified for the particular Microsoft Instructor-Led Courseware.

d. “End User” means an individual who is (i) duly enrolled in and attending an Authorized Training Session
or Private Training Session, (ii) an employee of a MPN Member, or (iii) a Microsoft full-time employee.

e. “Licensed Content” means the content accompanying this agreement which may include the Microsoft
Instructor-Led Courseware or Trainer Content.

f. “Microsoft Certified Trainer” or “MCT” means an individual who is (i) engaged to teach a training session
to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) currently certified as a
Microsoft Certified Trainer under the Microsoft Certification Program.

g. “Microsoft Instructor-Led Courseware” means the Microsoft-branded instructor-led training course that
educates IT professionals and developers on Microsoft technologies. A Microsoft Instructor-Led
Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business Group courseware.

h. “Microsoft IT Academy Program Member” means an active member of the Microsoft IT Academy
Program.

i. “Microsoft Learning Competency Member” means an active member of the Microsoft Partner Network
program in good standing that currently holds the Learning Competency status.

j. “MOC” means the “Official Microsoft Learning Product” instructor-led courseware known as Microsoft
Official Course that educates IT professionals and developers on Microsoft technologies.

k. “MPN Member” means an active Microsoft Partner Network program member in good standing.
l. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic device
that you personally own or control that meets or exceeds the hardware level specified for the particular
Microsoft Instructor-Led Courseware.

m. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led Courseware.
These classes are not advertised or promoted to the general public and class attendance is restricted to
individuals employed by or contracted by the corporate customer.

n. “Trainer” means (i) an academically accredited educator engaged by a Microsoft IT Academy Program
Member to teach an Authorized Training Session, and/or (ii) a MCT.

o. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and additional
supplemental content designated solely for Trainers’ use to teach a training session using the Microsoft
Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint presentations, trainer
preparation guide, train the trainer materials, Microsoft OneNote packs, classroom setup guide and Pre-
release course feedback form. To clarify, Trainer Content does not include any software, virtual hard
disks or virtual machines.

2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one copy
per user basis, such that you must acquire a license for each individual that accesses or uses the Licensed
Content.

2.1 Below are five separate sets of use rights. Only one set of rights applies to you.

a. If you are a Microsoft IT Academy Program Member:


i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User who is enrolled in the Authorized Training Session, and only immediately prior to the
commencement of the Authorized Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they can
access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they can
access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure each End User attending an Authorized Training Session has their own valid licensed
copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized Training
Session,
v. you will ensure that each End User provided with the hard-copy version of the Microsoft Instructor-
Led Courseware will be presented with a copy of this agreement and each End User will agree that
their use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement
prior to providing them with the Microsoft Instructor-Led Courseware. Each individual will be required
to denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
vii. you will only use qualified Trainers who have in-depth knowledge of and experience with the
Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware being taught for
all your Authorized Training Sessions,
viii. you will only deliver a maximum of 15 hours of training per week for each Authorized Training
Session that uses a MOC title, and
ix. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer resources
for the Microsoft Instructor-Led Courseware.

b. If you are a Microsoft Learning Competency Member:


i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User attending the Authorized Training Session and only immediately prior to the
commencement of the Authorized Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique redemption
code and instructions on how they can access one (1) digital version of the Microsoft Instructor-
Led Courseware, or
3. you will provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure that each End User attending an Authorized Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized
Training Session,
v. you will ensure that each End User provided with a hard-copy version of the Microsoft Instructor-Led
Courseware will be presented with a copy of this agreement and each End User will agree that their
use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement prior to
providing them with the Microsoft Instructor-Led Courseware. Each individual will be required to
denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
vii. you will only use qualified Trainers who hold the applicable Microsoft Certification credential that is
the subject of the Microsoft Instructor-Led Courseware being taught for your Authorized Training
Sessions,
viii. you will only use qualified MCTs who also hold the applicable Microsoft Certification credential that is
the subject of the MOC title being taught for all your Authorized Training Sessions using MOC,
ix. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
x. you will only provide access to the Trainer Content to Trainers.
c. If you are a MPN Member:
i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User attending the Private Training Session, and only immediately prior to the commencement
of the Private Training Session that is the subject matter of the Microsoft Instructor-Led
Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the unique
redemption code and instructions on how they can access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure that each End User attending a Private Training Session has their own valid licensed
copy of the Microsoft Instructor-Led Courseware that is the subject of the Private Training Session,
v. you will ensure that each End User provided with a hard copy version of the Microsoft Instructor-Led
Courseware will be presented with a copy of this agreement and each End User will agree that their
use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement prior to
providing them with the Microsoft Instructor-Led Courseware. Each individual will be required to
denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching a Private Training Session has their own valid licensed
copy of the Trainer Content that is the subject of the Private Training Session,
vii. you will only use qualified Trainers who hold the applicable Microsoft Certification credential that is
the subject of the Microsoft Instructor-Led Courseware being taught for all your Private Training
Sessions,
viii. you will only use qualified MCTs who hold the applicable Microsoft Certification credential that is the
subject of the MOC title being taught for all your Private Training Sessions using MOC,
ix. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
x. you will only provide access to the Trainer Content to Trainers.

d. If you are an End User:


For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for your
personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you may access the
Microsoft Instructor-Led Courseware online using the unique redemption code provided to you by the
training provider and install and use one (1) copy of the Microsoft Instructor-Led Courseware on up to
three (3) Personal Devices. You may also print one (1) copy of the Microsoft Instructor-Led Courseware.
You may not install the Microsoft Instructor-Led Courseware on a device you do not own or control.

e. If you are a Trainer:


i. For each license you acquire, you may install and use one (1) copy of the Trainer Content in the
form provided to you on one (1) Personal Device solely to prepare and deliver an Authorized
Training Session or Private Training Session, and install one (1) additional copy on another Personal
Device as a backup copy, which may be used only to reinstall the Trainer Content. You may not
install or use a copy of the Trainer Content on a device you do not own or control. You may also
print one (1) copy of the Trainer Content solely to prepare for and deliver an Authorized Training
Session or Private Training Session.
ii. You may customize the written portions of the Trainer Content that are logically associated with
instruction of a training session in accordance with the most recent version of the MCT agreement.
If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private Training
Sessions, and (ii) all customizations will comply with this agreement. For clarity, any use of
“customize” refers only to changing the order of slides and content, and/or not using all the slides or
content, it does not mean changing or modifying any slide or content.

2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not
separate its components and install them on different devices.

2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above, you may
not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any
third parties without the express written permission of Microsoft.

2.4 Third Party Notices. The Licensed Content may include third party content that Microsoft, not the
third party, licenses to you under this agreement. Notices, if any, for the third party content are included
for your information only.

2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also
apply to your use of that respective component and supplement the terms described in this agreement.

3. LICENSED CONTENT BASED ON PRE-RELEASE TECHNOLOGY. If the Licensed Content’s subject
matter is based on a pre-release version of Microsoft technology (“Pre-release”), then in addition to the
other provisions in this agreement, these terms also apply:

a. Pre-Release Licensed Content. This Licensed Content subject matter is on the Pre-release version of
the Microsoft technology. The technology may not work the way a final version of the technology will
and we may change the technology for the final version. We also may not release a final version.
Licensed Content based on the final version of the technology may not contain the same information as
the Licensed Content based on the Pre-release version. Microsoft is under no obligation to provide you
with any further content, including any Licensed Content based on the final version of the technology.

b. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly or
through its third party designee, you give to Microsoft without charge, the right to use, share and
commercialize your feedback in any way and for any purpose. You also give to third parties, without
charge, any patent rights needed for their products, technologies and services to use or interface with
any specific parts of a Microsoft technology, Microsoft product, or service that includes the feedback.
You will not give feedback that is subject to a license that requires Microsoft to license its technology,
technologies, or products to third parties because we include your feedback in them. These rights
survive this agreement.

c. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed Content on
the Pre-release technology upon (i) the date which Microsoft informs you is the end date for using the
Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the commercial release of the
technology that is the subject of the Licensed Content, whichever is earliest (“Pre-release term”).
Upon expiration or termination of the Pre-release term, you will irretrievably delete and destroy all copies
of the Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more
rights despite this limitation, you may use the Licensed Content only as expressly permitted in this
agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only
allow you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
• access or allow any individual to access the Licensed Content if they have not acquired a valid license
for the Licensed Content,
• alter, remove or obscure any copyright or other protective notices (including watermarks), branding
or identifications contained in the Licensed Content,
• modify or create a derivative work of any Licensed Content,
• publicly display, or make the Licensed Content available for others to access or use,
• copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
• work around any technical limitations in the Licensed Content, or
• reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.

5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws
and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the
Licensed Content.

6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations.
You must comply with all domestic and international export laws and regulations that apply to the Licensed
Content. These laws include restrictions on destinations, end users and end use. For additional information,
see www.microsoft.com/exporting.

7. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for it.

8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail
to comply with the terms and conditions of this agreement. Upon termination of this agreement for any
reason, you will immediately stop all use of and delete and destroy all copies of the Licensed Content in
your possession or under your control.

9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for
the contents of any third party sites, any links contained in third party sites, or any changes or updates to
third party sites. Microsoft is not responsible for webcasting or any other form of transmission received
from any third party sites. Microsoft is providing these links to third party sites to you only as a
convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party
site.

10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.

11. APPLICABLE LAW.


a. United States. If you acquired the Licensed Content in the United States, Washington state law governs
the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws
principles. The laws of the state where you live govern all other claims, including claims under state
consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the Licensed Content in any other country, the laws of that
country apply.

12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws
of your country. You may also have rights with respect to the party from whom you acquired the Licensed
Content. This agreement does not change your rights under the laws of your country if the laws of your
country do not permit it to do so.

13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS
AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE
AFFILIATES GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY
HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT
CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND
ITS RESPECTIVE AFFILIATES EXCLUDE ANY IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP
TO US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL,
LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.

This limitation applies to


o anything related to the Licensed Content, services, content (including code) on third party Internet
sites or third-party programs; and
o claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.

It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion or
limitation of incidental, consequential or other damages.

Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.

Remarque : Ce le contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.

EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n’accorde aucune autre garantie
expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection dues
consommateurs, que ce contrat ne peut modifier. La ou elles sont permises par le droit locale, les garanties
implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.

LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES
DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages
directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les autres
dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne:
• tout ce qui est relié au le contenu sous licence, aux services ou au contenu (y compris le code)
figurant sur des sites Internet tiers ou dans des programmes tiers; et.
• les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité
stricte, de négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel dommage. Si
votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires
ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus ne s’appliquera pas à votre
égard.

EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre
pays si celles-ci ne le permettent pas.

Revised July 2013



Acknowledgements
Microsoft Learning wants to acknowledge and thank the following for their contribution toward
developing this title. Their effort at various stages in the development has ensured that you have a good
classroom experience.

Ishai Ram – Content Development Lead

Ishai is the Vice President of SELA Group. He has over 20 years of experience as a professional trainer and
consultant on computer software and electronics.

Baruch Toledano – Senior Content Developer

Baruch is a senior project manager at SELA Group. He has extensive experience in producing Microsoft
Official Courses and managing software development projects. Baruch is also a lecturer at SELA College
delivering a variety of development courses.

Sasha Goldshtein – Subject Matter Expert

Sasha Goldshtein is the CTO at Sela Group, a Microsoft C# MVP and Regional Director, a Pluralsight and
O'Reilly author, and an international consultant and trainer. Sasha is the author of "Introducing Windows
7 for Developers" (Microsoft Press, 2009) and "Pro .NET Performance" (Apress, 2012). His is also a prolific
blogger and open-source contributor, and author of numerous training courses including .NET
Debugging, .NET Performance, Android Application Development, and Modern C++. His consulting work
revolves mainly around distributed architecture, production debugging and performance diagnostics, and
mobile application development.

Avi Avni – Subject Matter Expert

Avi Avni is a consultant and an instructor at SELA Group, with 7+ years of industry experience. Avi
specializes in design and development of large-scale applications and diagnosing memory and CPU
performance issues. Avi has 2+ years of experience as a team leader. Avi is a contributor to several open-
source projects such as F# Compiler, CoreCLR, Roslyn, and ClrMD.

Viacheslav Brekel – Subject Matter Expert

Viacheslav is a senior developer and lecturer at SELA Group. Viacheslav has six years of experience in
developing and maintaining large-scale solutions in a variety of technologies. Viacheslav is a proficient
problem solver and content developer. Viacheslav’s main technology interests vary between web and
desktop development.

Roi Godelman – Subject Matter Expert

Roi is a senior developer and lecturer at SELA Group. Roi has over five years of experience in developing
desktop, web, and mobile applications. Roi is a full stack developer, specializing in both front-end and
back-end development. Roi delivers many courses in the IT industry.

Shalev Zahavi – Subject Matter Expert

Shalev is a senior developer and lecturer at SELA Group. Shalev has over five years of experience in
software development and a proven track record in development of large-scale hybrid applications.
Shalev’s main interest is in back-end solution development. Shalev delivers many training sessions in the
industry. His other fields of interest include Azure development, web development, and mobile
development.

Apposite Learning & SELA Teams – Content Contributors

Shelly Aharoni, Naor Michelsohn, Amith Vincent, Kavitha Ravipati, Vinay Antony, Dhananjaya Punugoti,
and the Enfec Team.

Contents
Module 1: Overview of Service and Cloud Technologies
Module Overview 1-1

Lesson 1: Key Components of Distributed Applications 1-2


Lesson 2: Data and Data Access Technologies 1-6

Lesson 3: Service Technologies 1-10

Lesson 4: Cloud Computing 1-13


Lab: Exploring the Work Environment 1-21
Module Review and Takeaways 1-22

Module 2: Querying and Manipulating Data Using Entity Framework Core


Module Overview 2-1
Lesson 1: ADO.NET Overview 2-2

Lesson 2: Creating an Entity Data Model 2-6

Lesson 3: Querying Data 2-18


Lab A: Creating a Data Access Layer using Entity Framework 2-23
Lesson 4: Manipulating Data 2-24

Lab B: Manipulating Data 2-34


Module Review and Takeaways 2-35

Module 3: Creating and Consuming ASP.NET Core Web APIs


Module Overview 3-1

Lesson 1: HTTP Services 3-2

Lesson 2: Creating an ASP.NET Core Web API 3-13


Lesson 3: Consuming ASP.NET Core Web APIs 3-19

Lab: Creating an ASP.NET Core Web API 3-24

Lesson 4: Handling HTTP Requests and Responses 3-25


Lesson 5: Automatically Generating HTTP Requests and Responses 3-29

Module Review and Takeaways 3-43

Module 4: Extending ASP.NET Core HTTP Services


Module Overview 4-1
Lesson 1: The ASP.NET Core Request Pipeline 4-2

Lesson 2: Customizing Controllers and Actions 4-7

Lab: Customizing the ASP.NET Core Pipeline 4-13

Lesson 3: Injecting Dependencies into Controllers 4-14


Module Review and Takeaways 4-17

Module 5: Hosting Services


Module Overview 5-1

Lesson 1: Hosting services on-premises 5-3

Lab A: Host an ASP.NET Core Service in a Windows Service 5-7

Lesson 2: Hosting Services in Azure Web Apps 5-8

Lab B: Host an ASP.NET Core Web API in an Azure Web App 5-16
Lesson 3: Packaging services in containers 5-17

Lab C: Host an ASP.NET Core service in Azure Container Instances 5-30

Lesson 4: Implementing serverless services 5-31


Lab D: Implementing an Azure Function 5-43
Module Review and Takeaways 5-44

Module 6: Deploying and Managing Services


Module Overview 6-1
Lesson 1: Web Deployment with Visual Studio 2017 6-2
Lesson 2: Web Deployment on Linux 6-8

Lab A: Deploying an ASP.NET Core Web Service on Linux 6-15


Lesson 3: Continuous Delivery with Visual Studio Team Services 6-16
Lesson 4: Deploying Applications to Staging and Production Environments 6-23

Lab B: Deploying to Staging and Production 6-27


Lesson 5: Defining Service Interfaces with API Management 6-28
Lab C: Publishing a Web API with Azure API Management 6-38

Module Review and Takeaways 6-39

Module 7: Implementing Data Storage in Azure


Module Overview 7-1
Lesson 1: Choosing a Data Storage Mechanism 7-3

Lesson 2: Accessing Data in Azure Storage 7-7

Lab A: Storing Files in Azure Storage 7-15



Lesson 3: Working with Structured Data in Azure 7-16

Lab B: Querying Graph Data with Azure Cosmos DB 7-30

Lesson 4: Geographically Distributing Data with Content Delivery Network 7-31

Lesson 5: Scaling with Out-of-Process Cache 7-38

Lab C: Caching Out-of-Process with Azure Redis Cache 7-43

Module Review and Takeaways 7-44

Module 8: Monitoring and Diagnostics


Module Overview 8-1

Lesson 1: Logging in ASP.NET Core 8-2

Lesson 2: Diagnostic Tools 8-11


Lab A: Monitoring ASP.NET Core with ETW and LTTng 8-23

Lesson 3: Application Insights 8-24

Lab B: Monitoring Azure Web Apps with Application Insights 8-38


Module Review and Takeaways 8-39

Module 9: Securing Services On-premises and in Microsoft Azure


Module Overview 9-1

Lesson 1: Explaining Security Terminology 9-2


Lesson 2: Securing Services with ASP.NET Core Identity 9-9
Lab A: Using ASP.NET Core Identity 9-22

Lesson 3: Securing Services with Azure AD 9-23


Lab B: Using Azure Active Directory with ASP.NET Core 9-44
Module Review and Takeaways 9-45

Module 10: Scaling Services


Module Overview 10-1

Lesson 1: Introduction to Scalability 10-2


Lab A: Load Balancing Azure Web Apps 10-8

Lesson 2: Automatic Scaling 10-9

Lesson 3: Application Gateway and Traffic Manager 10-15


Lab B: Load Balancing with Azure Traffic Manager 10-25

Module Review and Takeaways 10-26



About This Course


This section provides a brief description of the course, audience, suggested prerequisites, and course
objectives.

Course Description
This course will provide you with the knowledge and skills to design and develop services that access local
and remote data from various sources. You will learn how to develop and deploy services to hybrid
environments, including on-premises servers and Microsoft Azure.

This course will help you prepare for the 70-487 exam.

Audience
This course is intended for both novice and experienced Microsoft .NET developers who have a minimum
of six months programming experience and want to learn how to develop services and deploy them to
hybrid environments.

Student Prerequisites
Before attending this course, students must have at least six months of programming experience. In
addition, the students must meet the following prerequisites.
• Experience with Microsoft Visual Studio 2017 or later.

• Familiarity with Microsoft ASP.NET.


• Experience with C# programming and concepts such as lambda expressions, LINQ, and anonymous
types.

• An understanding of the concepts of n-tier applications.


• Experience with querying and manipulating data with ADO.NET.

• Knowledge of XML and JSON data structures.

Course Objectives
After completing this course, students will be able to:

• Describe the features and functionalities of service and cloud technologies.


• Query and manipulate data with Entity Framework Core.

• Use ASP.NET Core Web API to create HTTP-based services and consume them from .NET and non-
.NET clients.
• Extend ASP.NET Core Web API services by using middleware, action filters, and media type
formatters.

• Host services on on-premises servers and various Azure environments such as Azure Web Apps, Azure
Container Instances, and Azure Functions.

• Deploy services on both on-premises servers and Azure.

• Choose a data storage solution, and cache, distribute, and synchronize data.
• Monitor and log services, both on-premises and in Azure.

• Describe the claim-based identity concepts and standards.

• Implement authentication and authorization with Azure Active Directory (Azure AD).

• Create scalable and load-balanced services.

Course Outline
The course outline is as follows:

Module 1. Overview of Service and Cloud Technologies


This module provides an overview of service and cloud technologies using .NET Core and the Azure cloud.
The module describes distributed applications, service technologies, and how cloud computing is
implemented on the Azure platform.

Module 2. Querying and Manipulating Data Using Entity Framework Core

This module explains how to create Entity Framework Core models and use them to query and
manipulate data.
Module 3. Creating and Consuming ASP.NET Core Web APIs

This module explains how to create and consume HTTP-based services by using ASP.NET Core Web API.

Module 4. Extending ASP.NET Core HTTP Services


This module explains how to extend ASP.NET Core web API services to support real-world scenarios such
as exception handling and caching.

Module 5. Hosting Services

This module describes how to host services on various Azure environments such as Azure Web Apps,
Azure Container Instances, and Azure Functions.

Module 6. Deploying and Managing Services


This module explains how to deploy services to both on-premises and cloud environments and how to use
continuous integration and continuous delivery processes to automate the deployment.

Module 7. Implementing Data Storage in Azure


This module explains how to store and access data stored in Azure Storage. It also explains how to
configure storage access rights for storage containers and content.

Module 8. Monitoring and Diagnostics

This module explains how to monitor and log services, both on-premises and in Azure.

Module 9. Securing Services On-premises and in Microsoft Azure

This module describes claim-based identity concepts and standards, and how to implement
authentication and authorization by using Azure AD to secure an ASP.NET Core Web API service.
Module 10. Scaling Services

This module explains how to create scalable services and applications and scale them automatically by
using Web Apps load balancers, Azure Application Gateway, and Azure Traffic Manager.

Course Materials
The following materials are included with your kit:

• Course Handbook is a succinct classroom learning guide that provides the critical technical
information in a crisp, tightly focused format, which is essential for an effective in-class learning
experience.
You may be accessing either a printed course handbook or digital courseware material via the Skillpipe
reader by Arvato. Your Microsoft Certified Trainer will provide specific details, but both printed and digital
versions contain the following:

o Lessons guide you through the learning objectives and provide the key points that are critical to
the success of the in-class learning experience.

o Labs provide a real-world, hands-on platform for you to apply the knowledge and skills learned
in the module.

o Module Reviews and Takeaways sections provide on-the-job reference material to boost
knowledge and skills retention.

o Lab Answer Keys provide step-by-step lab solution guidance.

To run the labs and demos in this course, use the code and instruction files that are available on GitHub:
• Instruction files: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/tree/master/Instructions

• Code files: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-


Services/tree/master/AllFiles

Make sure to clone the repository to your local machine. Cloning the repository before the course ensures
that you have all the required files without depending on the connectivity in the classroom.

• Course evaluation. At the end of the course, you will have the opportunity to complete an online
evaluation to provide feedback on the course, training facility, and instructor.

o To provide additional comments or feedback, or to report a problem with course resources, visit
the Training Support site at https://trainingsupport.microsoft.com/en-us. To inquire about the
Microsoft Certification Program, send an e-mail to certify@microsoft.com.

Module 1
Overview of Service and Cloud Technologies
Contents:
Module Overview 1-1

Lesson 1: Key Components of Distributed Applications 1-2

Lesson 2: Data and Data Access Technologies 1-6


Lesson 3: Service Technologies 1-10

Lesson 4: Cloud Computing 1-13


Lab: Exploring the Work Environment 1-21
Module Review and Takeaways 1-22

Module Overview
This module provides an overview of service and cloud technologies using Microsoft .NET Core and
Microsoft Azure. The first lesson, “Key Components of Distributed Applications,” discusses characteristics
that are common to distributed systems, regardless of the technologies they use. Lesson 2, “Data and Data
Access Technologies” describes how data is used in distributed applications. Lesson 3, “Service
Technologies,” discusses two of the most common protocols in a distributed system and the .NET Core
technologies used to develop services based on those protocols. Lesson 4, “Cloud Computing,” describes
cloud computing and how it is implemented in Azure.

Note: The Azure portal user interface (UI) and Azure dialog boxes in Visual Studio 2017 are
updated frequently when new Azure components and SDKs for .NET are released. Therefore, it is
possible that some differences will exist between screen shots and steps shown in this module,
and the actual UI you encounter in the Azure portal and Visual Studio 2017.

Objectives
After completing this module, you will be able to:

• Explain services architecture and hosting environments.


• Explain cloud computing and the Microsoft Azure cloud platform.

• Explain data access strategies.



Lesson 1
Key Components of Distributed Applications
Users today expect applications to present and process information from varied data sources, which might
be geographically distributed. Modern applications must also support different platforms such as mobile
and desktop, in addition to providing up-to-date information and an appealing UI.

Designing such applications is not a trivial task and involves collaboration and integration between several
groups of components.
This lesson describes the key components and architecture of modern distributed applications.

Lesson Objectives
After completing this lesson, you will be able to:
• Describe the basic characteristics of distributed applications.

• Describe the logical layers that constitute a distributed application.

Characteristics of Distributed Applications


Today’s data is distributed by nature. People share
data with family, friends, and colleagues.
Companies share data with partners and
customers, and applications share data on the
web.

Customers expect applications to be always connected and to fetch all the information they need when
they need it.
The virtual world does not have borders and data
must be available across technologies and
platforms.

Modern applications can run multiple instances on a variety of different platforms, yet they are expected
to have access to the same data and always stay in sync.

Data is distributed between data centers, private computers, and mobile devices. Data should be secured
and private, but at the same time available to its owners and legitimate customers. Today, both data and
the number of users have increased exponentially. Applications must provide services to access data and
maintain high-quality standards in terms of availability and performance.

The only way to achieve availability and performance is by collaboration and distribution of load. An
application can achieve its performance requirements by distributing the computing load across multiple
servers. By using many web servers that are geographically distributed, you also increase the high
availability of your applications. Applications also consume data to provide a rich set of functionalities
from a variety of data sources and share their data. Finally, applications replicate cache and centralize data
to provide the best user experience.

It is simply impossible to provide a modern, high-scale application within the borders of a traditional
single computer. Today, data and computing distribution is a necessity.

The following basic characteristics describe distributed systems:

• Scalability
• Availability

• Latency

• Reliability
• Security and privacy

Scalability
Distributed systems provide value by using the collaboration of a group of services and clients that are
geographically distributed. Each service must serve many requests originating from different clients. A
scalable service can provide service to a growing number of clients. Scalability is measured by the ratio of
the growth in the number of customers to the growth in the infrastructure required. You can achieve
scalability by using an appropriate design, such as designing stateless services so you can run them on
multiple computers and integrating distributed cache solutions for services that need to share their state
between computers.

Availability
Today’s systems serve a global audience, located around the world in different time zones. Services must
be available 100 percent of the time and be resilient to connectivity or performance issues. You can
achieve high availability in a distributed environment by using design guidelines such as fail-over services
and appropriate decoupling between services.

Latency
Latency is the delay introduced by a system when responding to a single request. Users expect
applications to present valuable information without any unnecessary delays. The information must always
be available, the application must be responsive, and the user experience must be smooth. To provide a
seamless user experience, services must have a short response time. If the service introduces a long delay,
the experience is not considered to be smooth. When designing a system to have low latency, you should
consider concepts such as caching data, parallelizing tasks, and reducing the size of payloads for both
requests and responses.
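
To make one of these ideas concrete, the following minimal C# sketch issues several independent HTTP requests in parallel instead of one after another, so the total wait is roughly the slowest call rather than the sum of all calls. The URLs shown are placeholders, not endpoints from this course.

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

public static class ParallelRequests
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task Main()
    {
        // Hypothetical endpoints; replace them with your own service URLs.
        string[] urls =
        {
            "https://example.com/api/orders",
            "https://example.com/api/customers",
            "https://example.com/api/inventory"
        };

        var stopwatch = Stopwatch.StartNew();

        // Start all requests without awaiting each one in turn...
        var pending = new Task<string>[urls.Length];
        for (int i = 0; i < urls.Length; i++)
        {
            pending[i] = Client.GetStringAsync(urls[i]);
        }

        // ...then await them together, so latency is bounded by the slowest call.
        string[] responses = await Task.WhenAll(pending);

        Console.WriteLine($"Fetched {responses.Length} responses in {stopwatch.ElapsedMilliseconds} ms");
    }
}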

Reliability
Information is a valuable asset. Clients expect distributed applications to store their data reliably and
make sure that it is never lost or damaged. Keeping data consistent might not be trivial in a distributed
environment where multiple instances might handle the same piece of data concurrently. Data must be
replicated and geographically distributed to handle the risk of hardware failure of any kind.

Security and Privacy


One of the greatest concerns when dealing with distributed systems is security and privacy.

The fact that the system is distributed means that data will be distributed as well. Yet the system must
ensure that only legitimate stakeholders get access to it at any time. Often distributed systems have no
boundaries and are accessible to anyone through the internet. This can include potential attackers who
wish to harm the system and disturb its normal behavior. Proper security design that incorporates
concepts such as communication encryption, authentication, and authorization, can reduce the risk of
information disclosure, denial of service, and data theft.

Question: What could be the consequences if a system cannot be scaled?



Logical Layers of Distributed Applications


As a developer of distributed systems, you are
often required to troubleshoot complicated issues.
It is often simpler to break down a complicated
task into smaller ones that are easier to resolve.
Separation of concerns is an approach that can
help you simplify the issues by splitting the larger
process into simpler tasks.

Separating the responsibilities between different components helps you achieve better
maintainability, testability, and agility. It is easier
to test each layer separately rather than testing
the whole system together. Systems that are hard to test are usually not tested at all, and if applications
are not properly designed and are poorly tested, they will fail in production. Maintenance complexity is
relative to application complexity, and so
proper separation can help. At the same time, integration introduces its own maintenance challenges and
these too must be taken into consideration.

The responsibilities in a distributed system can be divided as follows:

• Data layer
• Execution layer

• Service layer

• User Interface layer

Data Layer
The data layer is responsible for storing and accessing data. The data layer is responsible for storing,
querying, updating, or deleting the data as required while maintaining a reasonable performance. This
can be a complicated task when you are dealing with a large set of data, distributed across several data
sources.
The data manipulation policy depends on the data type and its properties. Data can be replicated,
distributed, and handled according to its characteristics. For example, client contacts can be replicated
across the data center because they change slowly. However, information about stocks must be always
accurate and therefore must be read from a single source.

Execution Layer
The execution layer contains the business logic and is responsible for carrying out the use-case scenarios
of the application. In other words, the execution layer implements the logic of the application. The
business logic uses the data layer to read and store data, and the UI layer to interact with the client. The
execution layer contains all the algorithms and logic of the application and is considered the brain of the
application.

Service Layer
The service layer exposes some of the capabilities of the application to the world as services. Other
applications might consume these services and use them as a data source or as a remote execution
engine.

The service layer acts as the interface for other applications, in contrast to the user interface layer, which
targets humans. The service layer drives collaboration of applications and enables distribution of
computing load and data. It is responsible for defining a contract that consumers must maintain to use

the service. It enforces security policies, validates incoming requests, and maintains the application
resources.

User Interface Layer


The user interface layer is the layer through which users interact with the application. It visually depicts
the data and operations of the application effectively and provides users with a simplified medium for
consuming the application data. While designing the UI, developers must consider the varying
expectations of different people and cultures. The UI should always be responsive, yet its ability to
respond quickly might be CPU-intensive especially when using modern interfaces such as touchscreens.
The UI can be displayed on a variety of different devices, some of which might have extreme limitations
such as screen size and resolution. Nevertheless, the UI must be effective and present a useful
visualization. The UI must provide simple, yet effective methods for the user to enter data and activate the
business and data layers to store and process it. Proper UI design is crucial because if the UI is not user-
friendly, the application will not be used.
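
To make the separation between layers concrete, the following minimal C# sketch shows a data-layer abstraction consumed by an execution-layer (business logic) class. The interface, class, and entity names are illustrative placeholders and do not come from the course labs.

using System.Collections.Generic;
using System.Linq;

// Data layer: responsible only for storing and retrieving entities.
public interface IOrderRepository
{
    IEnumerable<Order> GetOrdersForCustomer(int customerId);
}

// Execution layer: implements the business rules, and reaches storage only
// through the data-layer abstraction rather than accessing it directly.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository) => _repository = repository;

    public decimal CalculateCustomerTotal(int customerId) =>
        _repository.GetOrdersForCustomer(customerId).Sum(o => o.Total);
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

A service layer (for example, an ASP.NET Core Web API controller) could then expose OrderService operations over HTTP, and a UI layer would consume that service. Because each layer depends only on the abstraction below it, every layer can be tested in isolation by substituting a test double for the layer beneath it.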

Lesson 2
Data and Data Access Technologies
Our identities, financial status, commercial activities, professional, social relations and more, are persisted
as data, located across various data sources.

Applications access data, process it to provide value, and finally produce some more data for future use.
In this lesson, you will be introduced to various database technologies, along with .NET Core data access
technologies.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe common database technologies.


• Describe data access technologies in the .NET Core.

Storage Technologies
Data can be persisted in a variety of different
formats and in a wide range of infrastructures.
Each infrastructure and format is designed for
different scenarios and data types. Some storage
infrastructures are used to store a huge amount of
data and others have limited capacity. Some
storage infrastructures can execute complex
queries and others cannot. Some can access data
very quickly while others introduce long delays.

Data entities can be organized in different types of models, such as relational, hierarchical, and the
object-oriented model. In a relational model,
entities are persisted as tables consisting of rows and columns with predefined relations between them.
The relational model is the conceptual basis of relational databases such as Microsoft SQL® Server. In a
hierarchical model, entities are organized in a tree. Each entity might have one parent and multiple
children. Trees are easy to express in text, for example, in XML or JavaScript Object Notation (JSON)
documents. The object-oriented model is used to process data entities inside applications as most modern
languages are based on the object-oriented programming (OOP) approach.
Data can be persisted in a wide range of data sources such as relational databases, file-systems,
distributed file systems, distributed caches, NoSQL databases, cloud-storage, and in-memory stores.

Relational Databases
SQL Server databases and the Microsoft Azure SQL Database are the traditional large-scale data sources.
They are designed to store relational data and can execute complex queries and user-defined functions.
Queries are written declaratively in languages such as T-SQL and can execute Create, Read, Update, and
Delete (CRUD) operations.

File System
A file system is used to store and retrieve unstructured data on disks. The basic unit of data is a file. Files
are organized in a tree of directories that have a volume as its root. Operating systems such as Windows
and Linux use file systems as their basic storage system.

Distributed File System


A distributed file system provides the simplicity and data model of a file system, and at the same time
solves the size limitation derived from the disk size. A distributed file system is an arrangement of
networked computers that store data files and the users are exposed only to an abstraction of a single file
system. The distributed file system is transparent to the user.

Distributed Caches
Data access from relational databases is considered a long operation. To reduce latency, some data can be
cached in-memory, yet the size of such a cache is limited. Distributed in-memory cache solves the size
limitation by using an arrangement of networked computers, which store in-memory data as key-value
pairs and provides an experience that mimics a single cache to the end user. Distributed caches will be
discussed in Module 12, "Scaling Services" of Course 20487.

NoSQL Databases
NoSQL databases are an umbrella term for many types of data stores, each of which stores data in a non-relational fashion. NoSQL databases are often used to store large amounts of data. These data stores are schema-free, but data can be organized in a variety of models such as a document database, key-value store, columnar database, or graph database.

Azure Cosmos DB
Azure Cosmos DB is Microsoft's globally distributed database. It offers great scalability and availability
capabilities. It also supports different models such as:

• Document database with MongoDB API

• Columnar database with Cassandra API


• Graph database with Gremlin API

• Key-value database with Table API

Cloud-Storage
Infrastructures such as Microsoft Azure Storage enable cloud and on-premises applications to store their
data, which can be structured or unstructured, on a high-scale and persistent data store. Storage exposes
an interoperable API based on HTTP that can be used by any application running on any platform.

Microsoft Azure Table storage can be referred to as a key-value NoSQL database in the cloud, and
Microsoft Azure Blob storage is like a huge file system in the cloud. Storage will be discussed in Module 7,
"Implementing data storage in Microsoft Azure" of Course 20487.

In-Memory Stores
In-memory stores are the fastest data store but are limited in size, not persistent, and hard to use in a
multi-server environment. In-Memory stores are used to store temporary data, local volatile data, or
replication of data that was retrieved from an external data source.

.NET Technologies
Applications written in .NET Core usually access
data. .NET Core provides a variety of data access
technologies:
• System.IO contains all the infrastructure
required to access data persisted on a file
system. FileStream provides the basic
read\write operations and classes such as
FileInfo or DirectoryInfo provide the
required metadata.

• ADO.NET is the basic SQL Data-Access


technology. By using ADO.NET, it is possible
to open a connection to relational databases
and execute SQL statements and stored procedures. A stored procedure is code, usually SQL-based,
which is stored and executed on the database itself. The data is retrieved and can be saved as a
collection of rows and columns that reflect the relational model in which the data is stored in the
database. ADO.NET provides several techniques for fetching and manipulating data by using self-
managed cursors and iterators and relational and object-oriented models for storing the data in the
application memory.
• Entity Framework (EF) is an Object Relational Mapper (ORM) infrastructure. Applications use the
object-oriented approach to represent data entities and thus collections of rows and columns are not
a natural representation for a running program. Data must be converted from the relational model to
the object-oriented model. This is the role of an ORM infrastructure. EF was introduced in the .NET
Framework 3.5 and provides an infrastructure where queries are written in C#, executed against
relational databases, and produce results as collections of C# objects. At the core of the EF is a model
that represents the mapping between the relational and object-oriented representations. Entity
Framework will be discussed in Module 2, "Querying and Manipulating Data Using Entity Framework"
of Course 20487.

• ASP.NET Core introduces a powerful in-memory cache that can be used by any .NET Core application (a short sketch appears after this list).

• Distributed cache solutions, such as Azure Redis Cache, provide an in-memory store for .NET Core applications that negates the memory size limitation of in-memory caches by distributing cache objects over several servers. Using a distributed cache provides scalability and enhances the durability of cache items by saving copies of the cache items on participating nodes and by avoiding the need to recreate the cache items after a temporary server failure. A distributed cache requires cached objects to be serializable so that they can be transported to other nodes in the cache cluster.
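
The following sketch shows one way to use the ASP.NET Core in-memory cache from the Microsoft.Extensions.Caching.Memory package; the cache key, expiration time, and cached value are hypothetical placeholders, and in an ASP.NET Core application the cache is normally obtained through dependency injection rather than created directly.

using System;
using Microsoft.Extensions.Caching.Memory;

// Create a standalone in-memory cache (in ASP.NET Core, call services.AddMemoryCache()
// and receive an IMemoryCache instance through dependency injection instead).
IMemoryCache cache = new MemoryCache(new MemoryCacheOptions());

// Return the cached value if present; otherwise create it and keep it for five minutes.
string greeting = cache.GetOrCreate("greeting", entry =>
{
    entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
    return "Hello from the cache";
});

Console.WriteLine(greeting);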

HTTP-Based APIs
A vast variety of technologies are used to create client applications that consume data from services. This
illustrates the importance of exposing data in standard and widespread protocols such as HTTP, which
provides easy, standard, resource-based access to data.

Storage provides both HTTP and Managed APIs to access large unstructured data objects, such as videos
and images. Azure Table Service provides a NoSQL, key-value store for storing small objects, up to 1
megabyte (MB) per entity. Objects can also be stored by using Blob Service as binary blocks of data with a
size limit of 200 gigabytes (GB) per object.
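
As a minimal illustration of accessing Storage through a managed API, the following sketch uploads a local file to Blob storage by using the Azure.Storage.Blobs client library; the connection string, container name, and file path are placeholders that you would replace with your own values.

using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static class BlobUploadSample
{
    public static async Task UploadAsync()
    {
        // Connect to the storage account and get a reference to a container.
        var container = new BlobContainerClient("<storage-connection-string>", "videos");
        await container.CreateIfNotExistsAsync();

        // Upload a local file as a block blob named "intro.mp4".
        using (FileStream stream = File.OpenRead(@"C:\media\intro.mp4"))
        {
            await container.UploadBlobAsync("intro.mp4", stream);
        }
    }
}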

LINQ
LINQ is a C# feature used for querying in a declarative fashion. LINQ technology can be used to support
any kind of data source and provides a standard, consistent way to integrate data from different sources.
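
For example, the following sketch uses LINQ query syntax to filter and project an in-memory collection; the same syntax works against other LINQ providers, such as Entity Framework Core. The Student class and the sample data are hypothetical.

using System;
using System.Collections.Generic;
using System.Linq;

public class Student
{
    public string Name { get; set; }
    public int Grade { get; set; }
}

List<Student> students = new List<Student>
{
    new Student { Name = "Daniel", Grade = 92 },
    new Student { Name = "Tamar", Grade = 78 }
};

// Declarative LINQ query: the names of students with a grade of 80 or higher.
IEnumerable<string> honorStudents =
    from s in students
    where s.Grade >= 80
    select s.Name;

foreach (string name in honorStudents)
{
    Console.WriteLine(name);   // Prints: Daniel
}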

Question: Why is it important for applications to support HTTP for data access?

Lesson 3
Service Technologies
Services constitute a layer in application architecture, which exposes business logic capabilities to other
application components to improve component modularity and reusability.

Services are the core of distributed applications providing access to data and making it possible for users
to interact with other applications.

Services provide distributed applications with the ability to scale and meet the growing demands for
better performance, robustness, and interoperability for various consumers, whether it is a web
application, a mobile application, or even another service.

Using services as a layer for the application's business logic also contributes to the maintainability and testability of the application, thereby improving the application's quality. Separation of layers helps enforce the Single Responsibility Principle (SRP), making it possible to test each layer independently.

In this lesson, you will learn about services and how services are integrated into application architecture,
services technologies, and .NET services technologies.

Lesson Objectives
After completing this lesson, you will be able to:
• Describe the HTTP-based services.

• Describe the micro-services architecture.


• Describe ASP.NET Core.

HTTP-Based Services
A web service is a method of data transfer between software components based on web technologies.

Web technologies are mostly based on plain text data formats being sent between computers, improving interoperability and easing integration processes.

Web services are based on standards and use


application-level protocols to communicate with
each other. Although a variety of protocols exist,
HTTP is the main protocol used by web services.

HTTP-Based Services
HTTP is an application-layer protocol, which defines a set of characteristics for establishing request-response communication between two networked nodes. HTTP characteristics include methods (usually referred to as verbs) that can be performed on a remote computer, security extensions (HTTPS), authentication, status code conventions, and more.

HTTP-based web services are mostly used to manage resources that are a part of the HTTP paradigm,
custom structured textual resources, images, and more. Managing resources by using HTTP web services is

natural and is based on Uniform Resource Identifiers (URI) for resource identification and verbs for
performing operations on the selected resource.

You can use ASP.NET Core to create a rich, testable, and customizable environment for creating HTTP-
based services.
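
As a minimal sketch of this resource-based style, the following code uses HttpClient to retrieve and delete resources identified by URIs; the service address and resource paths are hypothetical.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class FlightsClient
{
    private static readonly HttpClient client = new HttpClient
    {
        BaseAddress = new Uri("https://example.com/")   // hypothetical service address
    };

    public static async Task RunAsync()
    {
        // GET retrieves the resource identified by the URI.
        string flights = await client.GetStringAsync("api/flights");
        Console.WriteLine(flights);

        // DELETE removes a specific resource, identified by its URI.
        HttpResponseMessage response = await client.DeleteAsync("api/flights/12");
        Console.WriteLine(response.StatusCode);
    }
}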
HTTP-based services are covered in Module 3, "Creating and Consuming ASP.NET Core Web APIs", Lesson
1, "HTTP Services" in Course 20487.

Definition of Micro-Services
The traditional monolithic architecture has some problems supporting the agility needed in modern software development: it tightly couples all development teams to the same development tools, it makes deployment difficult because all parts of the software must be tested before every deployment, and it makes scaling difficult because different parts of the application scale in different ways. Micro-services are a popular modern solution to these problems.

The micro-services architecture is a method for building applications by separating them into a collection of loosely coupled services. These services are organized according to business capabilities and priorities. The micro-services architecture has been used at companies such as Microsoft, Netflix, Facebook, and Google.

Some key benefits of micro-services architecture are:

• Continuous deployment of separated services without breaking the application.


• Each service uses the technologies that fit the developer and the task.

• Fault-tolerant services.
• Crash of one service does not affect other services.
• Easy to scale.

• No long commitment to technology.


Creating services is covered in Module 3, "Creating and Consuming ASP.NET Core Web APIs", Lesson 1,
"HTTP Services" in Course 20487.

Deploying services is covered in Module 6 "Deploying and managing services" in Course 20487.

Scaling services is covered in Module 10 "Scaling services" in Course 20487.



Introduction to ASP.NET Core


The variety of application platforms and
development technologies, along with different
operating systems and mobile devices, highlight
the need for simplicity and interoperability when
developing public or private services.

Choosing the proper set of technologies to be


used can affect business aspects in terms of
accessibility, and the ways it can be consumed by
various operating systems, mobile devices, and
development platforms.

Microsoft has released modern new technology


for creating web services called ASP.NET Core,
which will be introduced in this lesson.

ASP.NET Core
With ASP.NET Core, you can create HTTP services utilizing HTTP verbs and URIs, providing the support for
fully interoperable services that can be consumed by many platforms due to wide support across different
environments for HTTP.
Based on HTTP characteristics, ASP.NET Core uses HTTP headers to help consumers determine the format of data they expect to get back from the service. A single service implementation can generate responses in JSON (a human-readable, text-based standard), XML, and other encoding formats, without special handling on the service side.
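
The following sketch shows a minimal ASP.NET Core Web API controller that maps HTTP verbs and URIs to actions; the route, controller name, and returned data are hypothetical, and the framework serializes the returned objects to JSON or another negotiated format.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
[ApiController]
public class FlightsController : ControllerBase
{
    // GET api/flights
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new[] { "BY001", "BY002" };
    }

    // GET api/flights/5
    [HttpGet("{id}")]
    public ActionResult<string> Get(int id)
    {
        return "BY00" + id;
    }
}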

ASP.NET Core is covered in depth in Module 3, "Creating and Consuming ASP.NET Core Web APIs", and Module 4, "Extending ASP.NET Core HTTP Services" of Course 20487.

Lesson 4
Cloud Computing
Cloud computing is revolutionizing the way you develop services and applications. The on-demand model
of cloud computing provides new ways to scale and provide better availability of services.

The continuous growth of data, platforms, and users require a more robust and capacity-unlimited
platform to take on the expected load.

In this lesson, you will learn about cloud computing and its benefits, some architectural considerations for
setting up cloud computing, and the cloud computing products from Microsoft that are based on Azure.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain what cloud computing is.

• Describe the benefits of cloud computing.

• Explain the difference between IaaS, PaaS, FaaS, and SaaS.


• Describe the Azure cloud platform.

• Explain how to use Azure to host your application.

• Describe the components of the Azure ecosystem.


• Explain how to create new virtual machines with Azure.

Introduction to Cloud Computing


Setting up a data center for hosting applications is
a complicated and costly task. Choosing the right
hardware, installing the operating system,
planning network components, and configuring
load balancing solutions are only a portion of the
required tasks. Maintaining the data center is also
equally complicated because it includes time-
consuming tasks such as upgrading obsolete
hardware, installing software updates, backing up
applications and data, and replacing
malfunctioning hardware.

When designing a data center, you must consider


the expected capacity, usage during peak times, and expected growth. You need to consider the fact that
a part of the data center will be idle for most of the time while still consuming electricity and needing
maintenance.

Cloud computing can handle these issues by using an on-demand approach for computing. In cloud
computing, you lease computer resources from a cloud vendor, based on your current needs.

Typically, cloud services consist of a group of servers and storage resources scattered in different physical
locations. Cloud services share resources to provide hosted application high-availability, flexibility, and
maximum utilization of hardware.

Benefits of Cloud Computing


Setting up data centers for hosting applications
and services is often costly and requires high
maintenance.

Owning a data center introduces challenges to IT


departments in addition to the establishment
charges. Some of the challenges include:
• Supporting redundancy and high-availability
requires adding additional servers to the data
center in case of a hardware failure.

• Handling unpredicted load of incoming traffic


can cause temporary unavailability and loss of
potential customers.
• Trying to prepare for unpredicted load, which leads to adding more hardware that will be under-
utilized most of the time, thereby making the data center even less efficient.

The following illustration demonstrates the utilization of resources in hosting a service or an application
on a local data center compared to the cloud.

While cloud provisioning maintains a stable provisioning slightly above the application usage, as shown in
the preceding graph, on-premises provisioning fails to keep up with the application usage needs in two
scenarios. When the application grows rapidly, the static on-premises provisioning causes under-
provisioning. When the application usage drops, on-premises provisioning cannot scale down and causes
over-provisioning.
Cloud computing provides unlimited scaling in case of unpredicted load and enhances high-availability
and performance by taking advantage of the large capacity of available bandwidth, storage, and
computing resources.
Hosting applications and services on the cloud also improves utilization of resources by using an elastic
approach. An elastic approach is the scaling out of resources to meet the growing demand when needed
and scaling down when the demand is down again. This improves flexibility and reduces operational costs.

The following illustration shows some of the growth patterns that are common in modern applications and can benefit from using cloud computing.

Cloud computing vendors, such as Azure, also provide a wide range of features for hosted services and
applications, for data storage, caching, and more. Azure features will be covered in detail in later modules
and lessons.

Cloud Computing Strategies


Cloud computing can be customized to meet the
different needs of software vendors. When
considering a cloud-based solution, select from
the following strategies:
• Infrastructure as a Service (IaaS). With IaaS,
the cloud platform provides the ability to
create virtual machines located on a cloud,
manually manage the IT aspects of an
application, install different operating
systems, configure load-balancers, and
manage high-availability policies. The cloud
provides on-demand provisioned servers as
needed. IaaS is the fundamental building block of the cloud platform.

• Platform as a Service (PaaS). With PaaS, the cloud platform provides a ready-to-use infrastructure,
which includes an operating system, storage, databases, auto-configured load-balancer, backup,
replication and more. The software vendor can focus on creating the required database schema and
data, and deploy the application. The platform will take care of the rest, providing an on-demand
application-hosting environment that can be cloned and scales automatically.

• Function as a Service (FaaS). With FaaS, the cloud platform provides a ready-to-use platform to
develop, run, test, and deploy, without the need for managing infrastructure such as virtual machines
(VM). This technique is called "serverless" architecture and it is typically used in micro-services
applications. In this strategy, the platform handles scalability and availability.

• Software as a Service (SaaS). With SaaS, software vendors can provide their users with ready-to-use, on-demand software that benefits from the inherent capabilities of a cloud platform. SaaS provides business flexibility by building on cloud platform features such as scalability, high availability, self-management, backup, and more.

Examples of SaaS include Outlook.com web email and Office365.


The following diagram shows the difference between the various cloud computing strategies.

Introduction to Azure
Azure is the cloud computing platform offered by
Microsoft. Azure consists of pairs of data centers
located in some key areas in North America, South
America, Europe, Australia, and Asia, including
China.

Azure data centers are physically secured, protected from power failures and network outages, and designed to operate continuously, thereby providing reliability and operational sustainability.

Azure also provides geo-replication, which is a


disaster recovery solution that will automatically
back up data to a remote location in case of a disaster on the hosting data center, providing durability to
your applications and services.

As a complete cloud computing solution, Azure provides an on-demand, scalable, self-service computing
and storage resource platform for hosting services and applications from a wide variety of technologies,
such as .NET Core applications, Java applications, Python, PHP, and others, using SQL databases, MySQL,
hosted on Windows or Linux operating systems.

Azure supports a wide variety of platforms and technologies making it possible to host whole solutions
and not only standalone services.

Azure also offers a set of building blocks services for managing identities, communication, and media.
Azure also includes inherent features for scalability, replication, and backup, and advanced storage types,
which will be introduced in the following modules.

Azure PaaS Compute Services


The Azure compute services such as Web Apps
feature of Azure App Service, Function Apps, and
Azure Container Services are the Microsoft
offerings for PaaS.

Web Apps
Web Apps are designed to host applications and
services. They are exposed to the internet and
their scalability and availability make them a
prime choice to host the front-end of the
application. Azure also manages all the
infrastructure such as VM, operating system and
the application stack in Web Apps. With Web
Apps, it is possible to host applications written in different platforms including .NET, PHP, Node.js and
more. Web Apps is covered in depth in Module 5, "Hosting Services." in Lesson 2, "Hosting Services in
Azure Web Apps".

Function Apps
Function Apps is designed to run code. Each function can be run by triggers such as an HTTP request, a timer, a message posted to a queue, a blob added to blob storage, and more. Function Apps simplify
development and deployment. The developer focuses only on writing the code and then publishing it to
Azure Function. Azure will run it when it is triggered. Since Azure handles scalability and availability, this
makes it a prime choice to build micro-services and serverless applications. Function Apps is covered in
depth in Module 5, "Hosting Services." in Lesson 4, "Implementing Serverless Services".

Container Services
Container Services is designed to host container-based applications. A container is a stand-alone package
of software that runs as an isolated process. The developer needs to package the software and publish it
to the Container Service and Azure will run it. Since Azure handles scalability and availability, this makes it
a prime choice to build micro-services applications. Container Services is covered in depth in Module 5,
"Hosting Services." in Lesson 3, "Packaging Services in Containers".

Azure Application Components


Azure consists of several components that simplify
the development of cloud services.

Storage
Due to the unpredictable nature of cloud services,
data cannot be persisted reliably on the virtual
machines. Storage is a cloud-based large-scale
data store for persisting data in the cloud. Storage
provides an HTTP API for storing data objects.
Data objects can be stored in Blob storage, Table storage, Azure Files, or Queue storage.

There are four types of storage services in Azure:


• Blob storage. This type of storage is a non-structured collection of objects that can be accessed by
using a resource identifier and can be used for storing files, such as images, videos, large texts, and
other non-structured data.

• Table storage. This type of storage is a semi-structured collection of objects that can have fields but
cannot have relations between objects. The fields are not bound to a schema structure, and different
objects can have different fields within the same collection. Table storage also provides a queryable
API access to find objects.

• Microsoft Azure Files. This type of storage is file-based storage. It supports Network File System (NFS)
and HTTP-based access without mounting and can be accessed from multiple clients at the same
time.

• Queue storage. This type of storage provides a persistent messaging queue.

Storage is covered in depth in Module 7, "Implementing data storage in Microsoft Azure" in Course
20487.

Microsoft Azure Service Bus


Integrating hybrid services (running on-premises and in the cloud) can be challenging because
connecting systems across network boundaries in a secure and reliable manner is not an easy task. Service
Bus provides a messaging infrastructure to exchange messages in applications, between the cloud or
outside it.

Microsoft Azure Active Directory


Azure AD is an authentication, authorization, and identity management infrastructure.

Azure AD has single sign-on (SSO) access to Office 365 and many third-party applications, such as
Dropbox and Salesforce.

Azure AD supports two-factor authentication, self-service password management, device registration,


Role-Based Access Control (RBAC), and many monitoring, security, and auditing capabilities.

If you have Active Directory installed on-premises, you can easily integrate it with Azure AD to provide
better connectivity while not on-site, with seamless integration with current identity management policies.

Azure AD is covered in depth in Module 9, " Securing services on-premises and in Microsoft Azure" in
Course 20487.

Microsoft Azure Redis Cache


Azure Redis Cache provides a low latency and high throughput store for data objects on a local or
distributed shared in-memory cache with the availability Service Level Agreement (SLA) of 99.9%. Azure
Redis Cache improves performance by reducing load from the database and helping applications to scale
up to 530 GB with support for persistence.

Azure Redis Cache simplifies migration for applications that use on-premises in-memory or distributed
cache solutions. You can also use Azure Redis Cache to replace the session state and output cache
provider of ASP.NET.

Azure Redis Cache is covered in depth in Module 10, "Scaling Services."

Microsoft Azure Content Delivery Network


Content Delivery Network is a network of servers that are located around the world and provide static
content caching. Content Delivery Network servers can store videos, images, and other static data that can
be accessed by users who are geographically close. Having scattered Content Delivery Network data
centers helps reduce the latency and therefore, improves application performance. For example, a user
from Asia can use cloud services that are hosted in the North American data center for browsing
metadata for movies but will be redirected to Content Delivery Network servers geographically closer to
download the movie or watch the movie online.

Content Delivery Network is covered in depth in Module 10, "Scaling Services" in Course 20487.

SQL Database
SQL Database is a cloud-based relational database as a service, based on Microsoft SQL Server
technologies. SQL Database is fully scalable and provides high-availability access, support for SQL
Reporting, and enables data replication between cloud and on-premises databases.

The Mobile Apps Feature of Microsoft Azure App Service


Mobile Apps is a platform for creating scalable back-end services for applications that support storing and
accessing structured data, authentication, and push-notification without implementing the server-side
code.

Mobile Apps are beyond the scope of this course.

Azure IaaS
Hosting applications and services on the Azure
platform is easy and the power and productivity
of the Azure PaaS infrastructure enable you to
meet most of the requirements of common
services. Sometimes, you might need more fine-
grained control to host applications and services
on different operating systems such as Linux, or to
host multi-server environments.
Azure IaaS provides a platform for hosting a custom virtual machine in the cloud, giving you better control over the hosting servers. This
makes it possible to manage every aspect of the
desired solution, starting from the operating system, virtual network configuration, complicated software
pre-installation requirements, and local disk persistency.

Azure provides a set of operating system images to choose from while creating a virtual machine, which
can include Linux distributions and partners’ solutions. You can also create a custom virtual machine on-
premises, upload it, and then deploy it to the cloud. Azure provides various ways to host all kinds of software
and services.

You can migrate currently deployed applications by uploading a whole solution consisting of multiple
machines to the cloud for seamless continuation. Downloading virtual machines from Azure to be hosted
on-premises is supported as well.
Microsoft Azure Virtual Machines uses Virtual Hard Disks (VHDs) that are stored on Storage. By storing the
VHDs in Storage, you get durability, because the disks are replicated to three copies and are saved on two
different data centers.

Azure provides an API for deployment and management capabilities, both in PowerShell cmdlets (scripts),
and programmatically using HTTP API, making it possible to create custom management tools integrated
into any software solution.
Question: When would you choose IaaS over PaaS?

Demonstration: Exploring the Microsoft Azure Portal


In this demonstration, you will open the Azure portal, create a new cloud service, and configure it by
using the portal.

Demonstration Steps
You will find the steps in the “Demonstration: Exploring the Microsoft Azure Portal“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD01_DEMO.md.

Lab: Exploring the Work Environment


Scenario
In this lab, you will explore several frameworks and platforms, such as Entity Framework Core, Microsoft
ASP.NET Core Web API, and Microsoft Azure, which are used for creating distributed applications.

Objectives
After completing this lab, you will be able to:

• Create an entity data model by using Entity Framework Core.


• Create an ASP.NET Core Web API service.

• Create an Azure SQL database.

• Deploy a web application to an Azure website.

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD01_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD01_LAK.md.

Exercise 1: Creating an ASP.NET Core Project


Scenario
In this exercise, you will create a Web API through ASP.NET Core by using the command prompt.

Exercise 2: Creating a Simple Entity Framework Model


Scenario
In this exercise, you will create data model classes to represent flights, implement a DbContext-derived
class, and create a new repository class for the Flight entity.

Exercise 3: Creating a Web API Class


Scenario
Implement the flight service by using ASP.NET Core Web API. Start by creating a new ASP.NET Web API
controller, and implement CRUD functionality by using the POST and GET HTTP methods.

Exercise 4: Deploying the Web Application to Azure


Scenario
In this exercise, you will create an Azure web app and a SQL database to host the ASP.NET Core Web API
application.

Module Review and Takeaways


In this module, you have been introduced to the characteristics of distributed applications and the
benefits that can be provided by using distributed architecture. You became familiar with databases and
data access technologies. You also learned about service technologies and different services approaches
and the considerations for choosing service technology. Lastly, you have been introduced to cloud
computing concepts and Azure.

Best Practices
• Plan your application architecture to be appropriate with the technical requirements while
understanding the limitations of distributed architecture.

• Choose the database technology that will let you scale according to your application usage – combine different approaches when appropriate (relational databases, NoSQL).
• Think of your consumers while choosing a service technology. Use HTTP services for high compatibility and resource-based communication.
• Describe your software deployment and configuration in detail before choosing a cloud computing strategy (IaaS, PaaS).

Review Question
Question: What are the key benefits of micro-services architecture?

Module 2
Querying and Manipulating Data Using Entity Framework
Core
Contents:
Module Overview 2-1

Lesson 1: ADO.NET Overview 2-2


Lesson 2: Creating an Entity Data Model 2-6

Lesson 3: Querying Data 2-18


Lab A: Creating a Data Access Layer using Entity Framework 2-23
Lesson 4: Manipulating Data 2-24

Lab B: Manipulating Data 2-34


Module Review and Takeaways 2-35

Module Overview
Typically, all applications store some data in a database. Some examples of data include configuration
settings, application data, user information, documents, and many others.

The .NET Framework provides a set of tools that helps you access and manipulate data that is stored in a
database. In this module, you will learn about the Entity Framework Core data model, and about how to
create, read, update, and delete data. Entity Framework Core is a rich object-relational mapper, which
provides a convenient and powerful application programming interface (API) to manipulate data.

This module focuses on the Code First approach with Entity Framework Core.

Objectives
After completing this module, you will be able to:

• Describe basic objects in ADO.NET and explain how asynchronous operations work.
• Create an Entity Framework Core data model.

• Query data by using Entity Framework Core.

• Insert, delete, and update entities by using Entity Framework Core.



Lesson 1
ADO.NET Overview
ADO.NET is the original low-level data access API in the .NET Framework. Although this module does not
focus on ADO.NET, understanding basic objects and operations from the ADO.NET library is essential for
using higher-level approaches, such as Entity Framework Core.

This lesson describes fundamental ADO.NET operations and its asynchronous support.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the basic objects in ADO.NET.

• Use asynchronous database operations with ADO.NET.

ADO.NET Basic Objects


ADO.NET is the basic data access API in .NET Core. It contains data providers that support most free and commercial SQL databases
available today. A data provider is responsible for
implementing database-specific protocols and
features, and at the same time presenting a
consistent API so that replacing the application's
data provider does not involve many code
changes.

Use System.Data.SqlClient for connecting to


Microsoft SQL Server databases and Microsoft
Azure SQL databases.

To connect to other databases, you can often find third-party data providers online, or you can
implement your own data provider.

For more information about ADO.NET Data Providers, see


https://aka.ms/moc-20487D-m2-pg1

The rest of this topic focuses on fundamental ADO.NET concepts and classes. Each data provider has its
own classes, which implement a set of common interfaces.

Connection
Use the ADO.NET connection object to connect to your database. The type of ADO.NET connection object
that implements the IDbConnection interface is SqlConnection.

A connection object is responsible for connecting to the database and initiating additional operations,
such as executing commands or managing transactions. Typically, you create a connection object with a
connection string, which is a locator for your database and may contain connection-related settings, such
as authentication credentials and timeout settings.

Command
Use the ADO.NET command object to send commands to the database. Commands can either return
data, such as the result of a select query or a stored procedure, or have no data returned, such as when
you use an insert or delete statement, or a Data Definition Language (DDL) query. The type of ADO.NET
command object that implements the IDbCommand interface is SqlCommand.

A command object can represent a single command or a set of commands. Query commands return a set
of results, as a DataReader object or a DataSet object, or a single value, usually the result of an
aggregated action, such as a row count, or calculation of an average.
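
For example, the following sketch (assuming a Students table and a valid connection string) uses the ExecuteScalar method to retrieve a single aggregated value.

Executing a command that returns a single value

using (var connection = new SqlConnection(connectionString))
{
    var command = new SqlCommand("SELECT COUNT(*) FROM Students", connection);
    connection.Open();

    // ExecuteScalar returns the first column of the first row in the result set.
    int studentCount = (int)command.ExecuteScalar();
    Console.WriteLine("Number of students: {0}", studentCount);
}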

DataReader
Use the ADO.NET data reader to dynamically iterate a result set obtained from the database. If you use a
data reader to access data, you must maintain a live connection while you read from the database.
Additionally, data readers can only move forward while iterating the data. This data-access strategy is also
referred to as the connected architecture. The types of ADO.NET data reader object that implements the
IDataReader interface is SqlDataReader.

The following code example demonstrates how to query a database with a data reader.

Querying a database with a data reader


var connection = new SqlConnection("Server=myServer;Database=StudentsDatabase;Trusted_Connection=True;");
using (connection)
{
    var command = new SqlCommand("SELECT StudentID, StudentName FROM Students", connection);
    connection.Open();

    SqlDataReader reader = command.ExecuteReader();

    if (reader.HasRows)
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}\t{1}",
                reader.GetInt32(0),
                reader.GetString(1));
        }
    }
    else
    {
        Console.WriteLine("No data found.");
    }
    reader.Close();
}

When using a data reader, you can access only one database record at a time, as shown in the preceding
example. If you need multiple records at once, it is your responsibility to store them as you move along to
the next record. Although this seems like a major inconvenience, data readers are very efficient in terms of
memory utilization, because they do not require the entire result set to be fetched into memory.

DataAdapter
Use the ADO.NET data adapter to load a result set obtained from a database into the memory. After
loading the entire result set and caching it in the memory, you can access any of its rows, unlike the data
reader, which only provides forward iteration. You should use this data-access strategy, referred to as the
disconnected architecture, when you do not want to maintain a live connection to the database while
processing the data.

Data adapters store the results in a tabular format. You can also change the data after it is loaded and use
the data adapter to apply the changes back to the database. The type of ADO.NET data adapter object
that implements the IDataAdapter interface is SqlDataAdapter.

Although data adapters are convenient to use (especially in conjunction with the DataSet class, which is
explained in the next section), they impose a larger overhead than data readers because the entire result
set must be fetched into memory before you can perform any operations.

DataSet
The DataSet class is one of the most frequently used objects in ADO.NET. You use it to retrieve tabular
data from a database. Although you can fill a DataSet object manually with data, you typically load it by
using the DataAdapter class.

The following code example demonstrates how to load data to a DataSet object by using a data adapter.

Loading data into a DataSet object with the SqlDataAdapter class


string query = "SELECT * FROM Students WHERE StudentName LIKE 'a%'";
var connection = new SqlConnection(connectionString);
var adapter = new SqlDataAdapter(query, connection);
var data = new DataSet();
adapter.Fill(data);

You can use DataSet objects to hold information from more than one table at one time and maintain
relationships between tables inside a DataSet object.
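
The following sketch (assuming Students and Courses tables that share a StudentID column) demonstrates filling two tables into a single DataSet object and defining a relation between them.

Relating tables inside a DataSet object

var connection = new SqlConnection(connectionString);
var adapter = new SqlDataAdapter("SELECT * FROM Students; SELECT * FROM Courses", connection);

var data = new DataSet();
adapter.Fill(data);

// Name the tables and relate them through the StudentID column.
data.Tables[0].TableName = "Students";
data.Tables[1].TableName = "Courses";
data.Relations.Add("StudentCourses",
    data.Tables["Students"].Columns["StudentID"],
    data.Tables["Courses"].Columns["StudentID"]);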
Question: Why would you prefer using data readers to data adapters, and vice versa?

Asynchronous Operations with ADO.NET


Database operations can take a long time to
complete. For example, using a very complex
query, or running a complex stored procedure can
take a considerable amount of time. When
executing long-running queries in desktop
applications, it is common to run them in a
background thread, to prevent the UI thread from
becoming unresponsive. However, in server-side
applications, such as web applications or web
services, having too many managed threads
waiting for a database operation to complete can
adversely affect the performance of the
application. With ADO.NET, you can use asynchronous operations to execute long-running queries
without creating and blocking a managed thread. When the database returns results, one of your threads
continues execution, and you can execute your business logic for the returned data.

To execute a command asynchronously, you use the ExecuteXXAsync methods. For example, the
ExecuteReaderAsync is the asynchronous version of the ExecuteReader method. The asynchronous
methods return a Task<T> object, where the generic type parameter T is the type returned by the
corresponding synchronous method. For example, the ExecuteReaderAsync method returns a
Task<DbDataReader> object, whereas the corresponding synchronous method, ExecuteReader, returns
a DbDataReader object.

This code example demonstrates how to use an asynchronous data reader.

Using an asynchronous data reader


using (var connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
    using (var command = new SqlCommand(commandString, connection))
    {
        using (SqlDataReader reader = await command.ExecuteReaderAsync())
        {
            while (await reader.ReadAsync())
            {
                Console.WriteLine("{0}\t{1}",
                    reader.GetInt32(0),
                    reader.GetString(1));
            }
        }
    }
}

In addition to the ExecuteXXAsync methods, you can also use the DbConnection.OpenAsync method
to open a database connection asynchronously. You can also use the DbDataReader.ReadAsync method,
as shown in the preceding example, to advance the reader asynchronously to the next row.

Note: The code in the preceding example uses the await keyword introduced in C# 5 to
schedule a continuation when the operation completes. You can also use the
Task.ContinueWith method to provide a delegate as the continuation of the task.

For additional examples of Asynchronous Programming in ADO.NET, see


http://go.microsoft.com/fwlink/?LinkID=298749&clcid=0x409

Lesson 2
Creating an Entity Data Model
This module describes how to create an Entity Framework Core model. You will learn about the Code First
approach for accessing data with Entity Framework Core.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the need for object-relational mappers (ORMs).


• Describe the ORM Code First development approach.

• Create an Entity Framework Core data context.

• Map classes to tables with data annotations.


• Map class properties to database foreign keys.

• Map type hierarchies with inheritance to database tables.

• Map classes to tables by using the Entity Framework Core Fluent API.

The Need for Object Relational Mappers


When you use the ADO.NET classes to interact
with your database, your application becomes
strongly coupled to the database. With ADO.NET,
you implement most of your data access as plain
text SQL statements or call stored procedures
implemented on the database server in some SQL
dialect. This approach is fragile and error-prone.
Additionally, it is not flexible. Changing the
database schema or database type altogether
might require considerable modifications to the
code of the application.

ORMs simplify your interaction of the application


with data and present an abstraction that maps application objects to database records. By accessing and
modifying objects, you modify the corresponding database records. Entity Framework Core keeps track of
the changes you make to the entities and only persists the changes to the database when you call the
save method. This introduces an abstraction layer between the database schema and the code of your
application, which makes your application more flexible. Additionally, because you access objects by using
strongly-typed code and query them with LINQ, you no longer need to rely on SQL statements. This makes
your code more robust.
Entity Framework Core is an ORM that provides a one-stop solution to interact with data that is stored in
a database. Instead of writing stored procedures and plain text SQL statements, you work with your own
domain classes, and you do not have to parse the results from a tabular structure to an object structure.

Create entity types with the Code First approach


The general approach to creating the data access layer (DAL) of an application with Entity Framework Core is code-first. Entity Framework Core can either create a new database according to the data model, or use an existing database.

Code-First
In this approach, the domain model is simply a set
of classes with properties that you provide.

In the code-first approach, Entity Framework Core


scans your domain classes and their properties,
and tries to map them to the database based on
naming conventions. Tables are named in the plural form of your class name and columns should have
names identical to those of your class properties. For example, for a class named Car with a property
named Model, its mapped table will be named Cars and it will have a column named Model. There are
several other conventions used by Entity Framework Core. For example, if the class has a property named
Id, it will be assigned as the table’s primary key column. If you need to customize these mappings, you
can use special data annotation attributes or the Fluent API. These customization options will be discussed
later in this module.

You can use the code-first approach both with new databases, and with existing ones. If you do not have
a database, the default behavior of code-first will be to create the database for you the first time you run
your application. If your database already exists, Entity Framework Core will connect to it and use the
defined mappings between your model classes and the existing database tables.

For more information on ADO.NET Entity Framework, see


http://go.microsoft.com/fwlink/?LinkID=298750&clcid=0x409

For more information on Entity Framework Core Development Workflows, see


https://aka.ms/moc-20487D-m2-pg2

Creating a DbContext and writing queries


In Entity Framework Core, a context is how you
access the database, without the need for
additional wrappers or abstractions. Context is the
glue between your domain model (classes) and
the underlying framework that connects to the
database and maps object operations to database
commands.

The DbContext class is an Entity Framework Core


type of context object that streamlines many of
the common tasks you must perform with the
context. This module focuses on the DbContext
class, and explains how it:

• Handles the database generation for Entity Framework Core Code First.

• Provides basic create, read, update, and delete (CRUD) operations, and simplifies the code that you
must write to execute these operations.
• Handles the opening and closing of database connections.

• Provides a change tracking mechanism.

Initializing the DbContext Class


The DbContext class constructor accepts the name of a database and creates a connection string by
using SQL Express or LocalDb. If both SQL Express and LocalDb are installed, the DbContext class uses
SQL Express.

Note: SQL Express is the free, lightweight version of SQL Server that can be installed on
development machines and ships with Visual Studio. LocalDb is an extension of SQL Express that
offers an easier way to create multiple database instances by using SQL Express. LocalDb ships
with Visual Studio 2017.

You can use a different database (that is not SQL Express or LocalDb) by providing a connection string in
your application configuration file (app.config or web.config). If you pass the name of that connection
string to the DbContext class constructor, it will use the connection string instead of the default database
engine.
The following code demonstrates how to put a connection string in your application configuration file,
and how to use it when creating an instance of the DbContext class.

DbContext with named connection strings


XML
<configuration>
  <connectionStrings>
    <add name="StudentsDB"
         connectionString="Data Source=Students.sdf" />
  </connectionStrings>
</configuration>

C#
DbContext context = new DbContext("StudentsDB");

Deriving from DbContext


Often, you will find it convenient to create your own class that derives from the DbContext class and
provides some helper methods or properties. It is common to derive from the DbContext class and
provide a property of type DbSet<T> for each entity type that is mapped to your database schema.
The following example demonstrates how to create a custom class that derives from the DbContext class.

Deriving from the DbContext class


public class StudentsContext : DbContext
{
    public StudentsContext() : base("StudentsDB") { }
    public DbSet<Student> Students { get; set; }
    public DbSet<Course> Courses { get; set; }
}

When you create an instance of the StudentsContext class depicted in the preceding code example,
Entity Framework Core will connect to the database and map the Students and Courses properties
according to the mapping information provided by the Student and Course classes.

Note: If you do not pass a database name or connection string name to the DbContext
class constructor, it will use the fully-qualified name of your custom DbContext-derived class as
the database name. For example, if the StudentsContext class depicted in the preceding code
example were in the StudentsManagement namespace, the database name would be
StudentsManagement.StudentsContext.

After you initialize the DbContext object, you can use it to perform CRUD operations on the database by
using the domain model classes you authored. You will learn how to perform CRUD operations in Lesson
3, "Querying Data", and Lesson 4, "Manipulating Data", and learn how to map domain classes to database
tables later in this lesson.

The following example illustrates how to query the database by using the DbContext class, retrieve a set
of objects, manipulate them, and save the results back to the database.

Querying and manipulating the database by using the DbContext class


using (StudentsContext context = new StudentsContext())
{
    Student student = context.Students.Find("Daniel");
    student.GraduationDate = DateTime.Now;
    context.SaveChanges();
}

In the preceding code example, the context.Students property returns an instance of the DbSet<T>
generic class. The DbSet<T> generic class represents a set of entities that you can use to perform CRUD
operations. You can think of it as the object representation of a database table. This class provides the
Find method, which can locate an object based on the database primary key. The example concludes by
calling the SaveChanges method of the DbContext class, which propagates the changes to the database.

Note: It is very important to keep the number of concurrent DbContext objects in your
application low. Each object can open a connection to the database and keep it open for some
time. Too many open connections can cause performance issues, both in your application and
your database. When declaring an instance of the DbContext object, use a using statement. This
will ensure that the database connection is closed and that any in-memory caches for objects you
recently queried are purged from memory.

Change Tracking
When you query the database and retrieve objects by using Entity Framework Core, the DbContext class
can track changes you make to these objects to facilitate saving them back into the database easily. The
Entity Framework Core change tracking system supports two modes of operation:

• Active change tracking. Every property informs the context if it was changed.

• Passive change tracking. The context attempts to detect changes before it determines which
property to save.

When you call the SaveChanges method of the DbContext class, the context checks if active change
tracking is enabled. If only passive change tracking is available, the DbContext object calls the
DetectChanges method. This method enumerates all entities retrieved by the context and compares
every property of every entity to the original value it had when it was retrieved. Any changed properties
are updated in the database.

To support active change tracking, you should mark all your properties on your domain classes (such as
the Student class in the preceding code example) with the virtual keyword. If you do so, Entity
Framework Core will create proxies at run time that derive from your class and track assignments to the
virtual properties of your model.
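
The following sketch shows a hypothetical entity class whose properties are all declared with the virtual keyword so that run-time proxies can track assignments to them, as described above.

An entity class that supports active change tracking

using System;

public class Student
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual DateTime GraduationDate { get; set; }
}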

DbInitializer – Initializing a new database with content


When the DbContext object is initialized, it
detects whether the target database already exists.
If the database does not exist, Entity Framework
Core can create the database based on the
information in your DbContext-derived class. To
create the database, you can use the
EnsureCreated method.
The following example illustrates how to create
the database, if it does not already exist, by using
the EnsureCreated method and a custom
DbContext-derived class.

Creating the database if it doesn’t exist


public static class DbInitializer
{
    public static void Initialize(ProductContext context)
    {
        context.Database.EnsureCreated();
        // Code to create initial data
    }
}

using (var context = new ProductContext(options))
{
    DbInitializer.Initialize(context);
}

Updating Databases with Code First Migrations


If your database was created by DbContext and you decided later to change something in your domain
model classes, Entity Framework Core will not update the database automatically. You might encounter
exceptions while running queries or saving your changes to the database.

You can use Code First Migrations to update the database schema automatically to match the changes
you made in your classes without having to recreate the database.

With Code First Migrations, you define the initial state of your classes and your database. After you
change your classes and run the Code First Migrations in design time, the set of changes you performed
over your classes is translated to the required migration steps for the database, and then those steps are
generated as database instructions in code. You can apply the changes to the database in design-time
before deploying the version of the application. Alternately, you can have the application run the
migration code after it starts. Code First Migrations is outside the scope of this course, but you can read
more about it on MSDN:

For more information on Code First Migrations, see


https://aka.ms/moc-20487D-m2-pg3

Mapping Classes to Tables with Data Annotations


By default, Entity Framework Core code-first uses
conventions to name tables and columns. In
addition, Entity Framework Core has conventions
for identifying which property should be used as a
primary key, and how to name foreign key
columns.

If you already have an existing database, and the


tables or columns are not named according to the
convention, you will need to manually map the
tables and columns to classes and properties. For
example, your table names might use underscore
(_) to separate words and have a prefix of T_ for
table names, such as T_Order_Details. If you do not yet have a database, but the database administrators
have their own convention for naming tables and columns, you will need to follow their convention, and
manually set the names for the new tables and columns that will be generated by Entity Framework Core.
If you need to map classes and properties manually to database schema objects such as tables, columns,
and keys, you can do so by using data annotation attributes. You can also use these attributes to specify
validation rules for your domain classes. Validation rules are outside the scope of this module. To use data
annotation attributes, add a reference to the System.ComponentModel.DataAnnotations assembly.

To map a class to a database table, add the [Table] attribute to the class declaration and specify the table
name. For example, [Table("Products")] maps the class to the Products table. To map a property to a
database column, add the [Column] attribute to the property declaration. For example,
[Column("ProductName")] maps the property to the ProductName column.

Note: By default, Entity Framework Core maps a class to a table named after the corresponding
DbSet<T> property on the context (for example, a DbSet<Product> property named Products is
mapped to a table named Products), and properties are mapped to database columns of the same
name. You should use the [Table] and [Column] attributes only if you want to customize these defaults.

The following example shows how to map a class to a database table by using code-first data annotations.

Mapping a table by using data annotations


using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Table("GlobalProducts")]
public class Product


{
public int Id { get; set; }

[Column("ProductName")]
public string Name { get; set; }
}

In the preceding code example, the Product class is mapped to a database table named GlobalProducts,
the Id property is mapped implicitly to a database column named Id, and the Name property is mapped
to a database column named ProductName.

The Id property in the preceding code example will be set as the primary key of the table, because the
convention for primary key is that either the property is named Id (or ID, the casing is ignored) or named
after the class, followed by Id, for example, ProductID.

When you map a property to a primary key column, by default, Entity Framework Core will set the value
of the column to be generated by the database automatically. For integer columns, the value will be auto-
incremented; for columns of type GUID, the database will generate a new GUID for each row. If you do
not want to use generated primary keys, and instead you want to provide the primary key value yourself
when creating the entity object, configure the primary key property with the
[DatabaseGenerated(DatabaseGeneratedOption.None)] attribute. To use the DatabaseGenerated
attribute, add a using directive to the System.ComponentModel.DataAnnotations.Schema
namespace.
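
For example, the following sketch (with a hypothetical Device class and SerialNumber key) shows a primary key whose value is supplied by the application rather than generated by the database.

Disabling database-generated primary key values

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Device
{
    // The application supplies the key value; the database will not generate it.
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int SerialNumber { get; set; }

    public string Name { get; set; }
}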

Mapping Properties to Foreign Keys


Tables in a relational database cross-reference each other by using foreign keys. Logically, foreign keys
represent one-to-many, many-to-one, or many-to-many relationships. For example, a table representing
orders may have a foreign key referencing a table of products. This relationship is many-to-many, as each
order may reference many different products, and each product may feature in many different orders.
As explained in the previous topic, you can map class properties to table columns by using data
annotations. You use data annotations to add foreign keys and to map object relationships between
instances of your classes to the database relationships. Specifically, you define two properties to express
the foreign key relationship: a foreign key property, whose type matches the database type of the foreign
key column, and an entity property, whose type is a class from your domain model.

You can map a foreign key relationship in two ways by using data annotations:
• From the foreign key property to the entity property.

• From the entity property to the foreign key property.

The following code example shows how to set a foreign key of a nested object to a property of your class
by using two approaches.

Mapping foreign keys by using data annotations


//From the foreign key property to the entity property
[ForeignKey("Course")]
public Guid CourseId { get; set; }
public Course Course { get; set; }

//From the entity property to the foreign key property


public Guid CourseId { get; set; }
[ForeignKey("CourseId")]
public Course Course { get; set; }

Note: The preceding code example illustrates a to-one relationship (either one-to-one or
many-to-one) from the enclosing entity to the Course entity. To specify a to-many relationship
(either one-to-many, or many-to-many), change the type of the entity property to
ICollection<T> or IEnumerable<T>.
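
As an illustration, the following sketch (with hypothetical Course and Student classes) shows a one-to-many relationship: the Course class exposes a collection navigation property, and the Student class holds both the foreign key property and the entity property.

A to-many relationship expressed with a collection navigation property

using System;
using System.Collections.Generic;

public class Course
{
    public Guid Id { get; set; }
    public string Title { get; set; }

    // To-many side: one course has many students.
    public ICollection<Student> Students { get; set; }
}

public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }

    // To-one side: foreign key property and entity property, named by convention.
    public Guid CourseId { get; set; }
    public Course Course { get; set; }
}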

By having both the foreign key property and entity property for each foreign key relationship, you gain
flexibility. If necessary, you can ask Entity Framework Core to fetch the referenced entity (as shown in the
Course class in the preceding code example) along with the enclosing entity, or you can refrain from
fetching it and rely only on its key, for performance reasons.

Note: If you do not use data annotations, and instead rely on the Code First convention for
foreign keys, you must make sure that the foreign key property is named as the entity property,
followed by Id (casing is ignored). In the preceding example, the entity property is named
Course and the foreign key property is named CourseId, therefore the data annotation
attributes are not required.

For more information on Code First Conventions, see


https://aka.ms/moc-20487D-m2-pg4

Demonstration: Creating an Entity Type, DbContext, and DbInitializer


In this demonstration, you will use Entity Framework Core to connect to SQL Server and use SQL
Operations Studio to query the data.

Demonstration Steps
You will find the steps in the “Demonstration: Creating an Entity Type, DbContext, and DbInitializer“
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.

Mapping Classes to Tables with the Fluent API


The Fluent API is a code-based declaration for
database mapping. It provides an efficient way to
map your whole database in one single file. If you
use the Fluent API, your domain classes are not
cluttered with data annotation attributes. In fact,
your domain classes can be declared in an
assembly that does not reference the Entity
Framework Core at all.

There are two ways to use the Fluent API:

• Override the OnModelCreating method of your DbContext-derived class.


• Implement the IEntityTypeConfiguration interface for each mapped class.

You can use the Fluent API by overriding the OnModelCreating method of your DbContext class. The
OnModelCreating method gives you access to a ModelBuilder object, which you use to declare the
association between your domain classes and the database tables, columns, and keys.

The following code example shows how to map a class to a database table, then map the key field of the
class, and then map a property of the class to a database column.

Fluent API with the OnModelCreating method


protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Product>().ToTable("GlobalProducts");
modelBuilder.Entity<Product>().HasKey(c => c.Id);
modelBuilder.Entity<Product>().Property(c =>
c.Name).HasColumnName("ProductName");
}

In the preceding code example, the ModelBuilder object is used to map the Product class to the
GlobalProducts table to declare that its Id property is the primary key and to associate the Name
property with the ProductName database column. This achieves the same result as the data annotations
example illustrated in Topic 4, "Mapping Classes to Tables with Data Annotations".

You can also use the Fluent API by using a class that implements the IEntityTypeConfiguration interface
for each domain class you have. You still need to associate the configuration classes with your
DbContext-derived class by using the OnModelCreating method.

The following code example illustrates how to use the Fluent API with a class that implements the
IEntityTypeConfiguration interface.

Fluent API with the IEntityTypeConfiguration interface


//In the DbContext-derived class:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.ApplyConfiguration(new ProductMapping());
}

public class ProductMapping : IEntityTypeConfiguration<Product>


{
public void Configure(EntityTypeBuilder<Product> builder)
{
builder.ToTable("GlobalProducts");
builder.HasKey(t => t.Id);
builder.Property(t => t.Id).HasColumnName("Id");
builder.Property(t => t.Name).HasColumnName("ProductName");
}
}

The ProductMapping class in the preceding example implements the IEntityTypeConfiguration generic
interface, and calls numerous methods in the Configure method to associate the Product class with the
GlobalProducts table. This again achieves the same result as using data annotations.

For additional examples of Configuring/Mapping Properties and Types with the Fluent API,
see http://go.microsoft.com/fwlink/?LinkID=313730
Mapping Type Inheritance to Tables
When you work in an object-oriented
environment, you can use inheritance to reflect
real-world relationships. When you work with an
ORM, the inheritance relationships should hold
when you map objects to database tables.

Entity Framework provides three approaches to


represent inheritance:

• TPT (Table per type)

• TPH (Table per hierarchy)

• TPC (Table per concrete type)

In the examples in this topic, you will see how to implement inheritance for the base class Person and two
inheriting classes: Student and Teacher.

TPT
In the TPT approach, a separate table represents each class. The derived class’ table has a foreign key
property that associates it with the base class’ table. The derived class’ table contains columns only for
properties declared in that class.

This image describes the TPT representation in the database.

FIGURE 2.1: ENTITY FRAMEWORK TABLE PER TYPE (TPT)


As you can see in the diagram, a table represents the Person class, and an additional table represents
each inherited type, that adds more columns and contains a foreign key to the parent table.

To create such an object-relational mapping, use data annotations to give each class a different table
name.

TPH
In the TPH approach, a single table represents the entire inheritance hierarchy. All the inherited types are
represented in the same table. When you map the table to domain classes (such as the Teacher and
Student classes), you only map the relevant properties for each class. This means that the database
representation of a Teacher object will have a null value for the Grade column, which only the Student
class has.

This image describes the TPH representation in the database.

FIGURE 2.2: ENTITY FRAMEWORK TABLE PER HIERARCHY (TPH)

As you can see in the diagram, the database table holds all the properties without differentiating between
inherited types, all types are represented in a single table, and the differences are configured in the
mapping definition.

To create such an object-relational mapping, use data annotations to give all classes the same table name.
You can also remove the [Table] attribute from the classes, because this is the default behavior of Code
First for handling inheritance mapping.

Note: When creating the Person table, Entity Framework Code First will add a
discriminator column to the table and use the type names (Person, Student, and Teacher) to
indicate which object type is stored in each row. You need not be aware of the discriminator
column or use it directly.
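
For illustration, a minimal sketch of a TPH mapping follows; it uses the same Person, Student, and Teacher classes shown later in this topic, simply without the [Table] attributes, so that all three types share a single table and the discriminator column is added automatically.

TPH sketch

public abstract class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Student and Teacher rows are stored in the same table as Person;
// the discriminator column indicates the concrete type of each row.
public class Student : Person
{
    public int Grade { get; set; }
}

public class Teacher : Person
{
    public decimal Salary { get; set; }
}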

TPC
In the TPC type approach, each concrete (non-abstract) class is represented in the database as its own
table. As a result, the database schema is not normalized, but mapping the tables to classes is much
easier.

This image describes the TPC type representation in the database.

FIGURE 2.3: ENTITY FRAMEWORK TABLE PER CONCRETE TYPE (TPC)


To create such an object-relational mapping, make sure that your context class does not have a
DbSet<T> property for the abstract type. You can also use data annotations to set the table name for
each of the derived classes.

This example shows how to implement inheritance by using the TPT approach. The code defines three
classes named Person, Student, and Teacher. Student and Teacher inherit from Person, and every class
is mapped to a different database table.

TPT example
public class MyDbContext : DbContext
{
public DbSet<Person> Persons { get; set; }
public DbSet<Student> Students { get; set; }
public DbSet<Teacher> Teachers { get; set; }
}

[Table("Person")]
public abstract class Person
{
public int Id { get; set; }
public string Name { get; set; }
public DateTime DateOfBirth { get; set; }
}

[Table("Student")]
public class Student : Person
{
public int Grade { get; set; }
}

[Table("Teacher")]
public class Teacher : Person
{
public decimal Salary { get; set; }
}

Question: Why would you use the Fluent API as opposed to data annotations?

Lesson 3
Querying Data
So far, you learned how to map domain classes in your application to database tables. This lesson explains
how to query data from a database by using SQL and Entity Framework Core.

Lesson Objectives
After completing this lesson, you will be able to:

• Query data by using LINQ to Entities.


• Query data by using Entity SQL.

• Query data by using direct SQL statements.

• Configure lazy and eager entity loading.

Query the Database by Using LINQ to Entities


Language Integrated Query (LINQ) is used to
query collections and objects in the .NET
Framework. LINQ to Entities provides a LINQ
wrapper to query your database and retrieve data
by using your domain classes. LINQ to Entities
resembles standard LINQ to Objects, but has a few
differences:
• LINQ to Entities requires a DbContext object
to communicate with a database.

• LINQ to Objects queries return an object


implementing the IEnumerable<T>
interface, whereas LINQ to Entities queries
return an object implementing the IQueryable<T> interface that extends the IEnumerable<T>
interface. The IQueryable<T> interface extends IEnumerable<T> by containing an expression tree
object, which represents the query you wrote. Entity Framework Core uses the expression tree to
translate your LINQ query to an SQL query.

• LINQ to Objects queries execute in memory on a collection of items, whereas LINQ to Entities queries
are translated to SQL statements and executed in the database.

Note: Every LINQ to Entities query is translated to SQL statements and executed at the
database level as a plain SQL statement by using ADO.NET. This is extremely important for
performance reasons. Executing a LINQ to Objects query on a table with millions of records
requires fetching the entire table into memory, whereas executing a LINQ to Entities query on the
same table can be extremely fast because the query executes on the database server.

This example shows how to retrieve a list of students from the database and filter it by the name of the
student. The context variable is a reference to a custom DbContext-derived class instance, and its
Students property returns a reference to a DbSet<Student> object.

Querying data by using LINQ to Entities


var studentsQuery = from s in context.Students
where s.Name.ToLower().Contains("a")
select s;

There are some limitations as to which operators and methods you can use in your LINQ to Entities
queries. Because every LINQ to Entities query is translated to SQL and executed on the database server,
some LINQ features and .NET Core methods are not supported by Entity Framework Core. For example,
you cannot use the String.IsNullOrWhiteSpace method and the Last LINQ query operator.

Best Practice: As with LINQ to Objects, queries written with LINQ to Entities are not
executed until they are enumerated, for example, by using foreach, or by calling the ToList or
FirstOrDefault extension methods. If you enumerate a LINQ to Entities query for the second
time, it will execute again in the database. For example, if you invoke the Count method of the
query several times, each invocation will execute the SQL statement again in the database.
Therefore, as a best practice, if you need to use the result of the query more than once, you
should store the result in a local variable.
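
For example, the following sketch (reusing the hypothetical Students DbSet from the previous example) stores the query result in a local variable so the SQL statement executes only once.

Storing a query result to avoid repeated execution

var studentsWithA = (from s in context.Students
                     where s.Name.ToLower().Contains("a")
                     select s).ToList();   // Executes the SQL query once

int count = studentsWithA.Count;           // Uses the in-memory list, no additional query
foreach (var student in studentsWithA)     // Also iterates the in-memory list
{
    Console.WriteLine(student.Name);
}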

For more information on LINQ to Entities, see


http://go.microsoft.com/fwlink/?LinkID=298752&clcid=0x409

Demonstration: Using Language-Integrated Query (LINQ) to Entities


In the following demonstration, you will create a DbContext object and use it to query a database by
using LINQ to Entities.

Demonstration Steps
You will find the steps in the “Demonstration: Using Language-Integrated Query (LINQ) to Entities"
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.

Query the Database by Using Direct SQL Statements


SQL statements run on the database level. Your
application can issue SQL statements or execute a
stored procedure in the database. You should
issue SQL statements directly only if you cannot
express your query by using LINQ to Entities or
Entity SQL. To use an SQL query statement from
Entity Framework Core, use the FromSql<T>
generic method of the DbSet class. Note that the
SQL query should use the database table names,
and not the names of your domain classes. The
difference between executing SQL statements
directly with ADO.NET and executing them with
Entity Framework Core is that with Entity Framework Core, the result is automatically translated to the
domain classes, instead of returning a DbDataReader object.

The following code example demonstrates how to execute an SQL query statement with Entity Framework
Core to retrieve objects from the database.

Retrieving objects by using Direct SQL


string sql = "select * from Products where Price > 5000";
var products = context.Products.FromSql<Product>(sql);

Finally, you can also execute SQL statements that return a single value or no result set at all. For example,
you could execute an INSERT statement to insert a new row into the database, an UPDATE statement, or a
stored procedure that returns a scalar value. To execute such a statement, use the ExecuteSqlCommand
method, which returns the number of rows affected.

The following example demonstrates how to execute an SQL statement by using the
ExecuteSqlCommand method.

Executing an SQL statement by using the ExecuteSqlCommand method


using (var context = new StudentsContext()) //derived from DbContext
{
context.Database.ExecuteSqlCommand(
"update Students set GraduationYear = 2016 where GraduationYear = 2015");
//ExecuteSqlCommand returns the number of records updated, in case this information is necessary
}

For more information on Transact-SQL Reference, see


http://go.microsoft.com/fwlink/?LinkID=298754&clcid=0x409

Question: Why would you use Entity SQL or direct SQL instead of LINQ to Entities?

Demonstration: Running Stored Procedures with Entity Framework


In this demonstration, you will use Entity Framework Core to execute a stored procedure and retrieve
structured data from its execution.

Demonstration Steps
You will find the steps in the “Demonstration: Running Stored Procedures with Entity Framework“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
Question: When would you invoke stored procedures from your application instead of
performing object manipulations by using Entity Framework Core?

Load Entities by Using Lazy and Eager Loading


When you work with large databases, some
queries can take longer than others. Lazy loading
and eager loading refer to the number of round
trips Entity Framework Core makes to load the
data from the database.

When using lazy loading, only the top level of the


data is returned, and the nested levels are
retrieved on demand. For example, if a Student
entity has a Courses entity property referring to a
list of courses in which the student is enrolled,
only the top-level Student entity information will
be fetched, and the Courses property will not be
fetched. With lazy loading, each round trip to the database returns a portion of the data, making queries
faster and minimizing the amount of memory required to store the results. Furthermore, the queries tend
to be simpler because they do not require joins across multiple database tables.

When using eager loading, Entity Framework Core returns the entire data set in one big round trip to the
database. Eager loading might take longer than multiple small round trips that return only part of the
result, depending on the complexity of the large query. Lazy loading was introduced in Entity Framework
Core 2.1 and must be configured manually.

The following code example demonstrates how to configure lazy loading in the OnConfiguring method by
using the UseLazyLoadingProxies method.

Configure lazy loading


class MyContext : DbContext
{
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder
.UseLazyLoadingProxies()

.UseSqlServer(ConfigurationManager.ConnectionStrings["BloggingDatabase"].ConnectionString
);

base.OnConfiguring(optionsBuilder);
}
}

When issuing a query, call the Include method to specify which entities should be eagerly loaded with the
containing entity. This is the most flexible way to instruct Entity Framework Core when you want to use
eager loading and is recommended.

The following code example demonstrates how to use eager loading with the Include method to retrieve
the property contents of the Courses entity along with the Student entity.

Eager loading with include


using (var context = new StudentsContext())
{
var studentsWithCourses = context.Students.Include(s => s.Courses).ToList();
//This example could also use Include("Courses") instead of the lambda expression
}

To enable lazy loading of your related entities, you need to declare your relationship properties, which
contain references to other entities, as virtual. If you reference a list of related entities, your virtual
property must be of type ICollection<T> or a derivative of it, such as IList<T>. You cannot use lazy
loading with IEnumerable<T>. By setting the properties to virtual, you ensure that Entity Framework
Core derives a new proxy class from the original class and adds the lazy load logic to the property.
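
The following sketch illustrates such a class; the Student, Course, and Department types are hypothetical.

Virtual navigation properties that can be lazily loaded

public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Virtual collection navigation property; loaded on demand.
    public virtual ICollection<Course> Courses { get; set; }

    // Virtual reference navigation property; loaded on demand.
    public virtual Department Department { get; set; }
}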

If you have non-virtual properties, you can explicitly load them at run time by using the Load method.
The following code example shows how to load a non-virtual referenced entity explicitly.

Explicitly loading a non-virtual referenced entity


context.Entry(student).Collection(s=> s.Courses).Load(); // Load a referenced collection
of entities
context.Entry(student).Reference(s=> s.Department).Load(); // Load a referenced entity

In the preceding example, the Entry method returns an EntityEntry object, which you can use to access
information about the entity, such as its original values and its state (unmodified, deleted, and
so on). The EntityEntry object provides information about the referenced entities and collections through
which you can explicitly load each relation. Similar to the Include method, the Collection and Reference
methods can also use a string parameter instead of the lambda expression.

If you have defined your reference and collection properties as virtual, and you want at some point to
momentarily turn off lazy loading on an entire context, set the LazyLoadingEnabled property of the
ChangeTracker property of the DbContext instance to false.
The following code example shows how to turn off lazy loading for the entire context. The context
variable refers to a DbContext object.

Turning off lazy loading


context.ChangeTracker.LazyLoadingEnabled = false;

For more information about Loading Related Data, see


http://go.microsoft.com/fwlink/?LinkID=313731

Lab A: Creating a Data Access Layer using Entity


Framework
Scenario
In this lab you will use Entity Framework Core to connect to an SQL Database.

Objectives
After completing this lab, you will be able to:

• Create a DAL layer


• Create an entity data model by using Entity Framework Core

Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_LAK.md.

Exercise 1: Creating a Data Model


Scenario
In this exercise, you will create the data access layer and connect to the database by using Entity
Framework Core to perform CRUD operations on the SQL Express database.

Exercise 2: Query your Database


Scenario
In this exercise, you will use the DAL class library to create a new console application that displays all the
data from the database.

Lesson 4
Manipulating Data
Until this point, you learned how to query data from a database by using LINQ to Entities, Entity SQL, and
even direct SQL statements. However, querying data is not the whole story. This lesson explains how to
manipulate data by using Entity Framework Core.

Lesson Objectives
After you complete this lesson, you will be able to:

• Enable change tracking for entities.


• Insert an entity into a database.

• Delete an entity from a database.


• Update an entity in a database.

• Use transactions with Entity Framework Core.

• Use Entity Framework Core with third-party databases.


• Use repository design pattern.

Change Tracking with Entity Framework Core


Entity Framework Core can track domain objects
that you retrieve from the database. Entity
Framework Core uses change tracking so that
when you call the SaveChanges method on the
DbContext object, it can synchronize your
updates with the database. You can check the
status of any object (such as whether it was
modified), inspect the history of your changes,
and undo changes if you see fit.

The DbContext object records the state of the


entity as it was when you retrieved it from the
database. The domain object itself contains the
current state of the entity. The DbContext can then determine whether the entity is:

• Added. The entity was added to the context and did not exist in the database.

• Modified. The entity was changed since it was retrieved from the database.

• Unchanged. The entity was not changed since it was retrieved from the database.

• Detached. The entity was detached from the context, so that changes to it will not be reflected in the
database.
• Deleted. The entity was deleted since it was retrieved from the database.

You can inspect the state of all the entities that have been changed in some way by using the
DbContext.ChangeTracker.Entries method. This could be useful for logging purposes or for reverting
certain changes in an overridden implementation of the SaveChanges method of the DbContext class.

The following code example demonstrates how you can enumerate all the objects that have been added,
modified, or deleted in an overriding implementation of the SaveChanges method.

Enumerating changes to objects


public class StudentsContext : DbContext
{
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
var changes = this.ChangeTracker.Entries().Where(entry => entry.State !=
EntityState.Unchanged);
foreach (var change in changes)
{
var entity = change.Entity;
//Inspect the object, the change, and possibly introduce additional changes
}
return base.SaveChanges(acceptAllChangesOnSuccess);
}
}

Furthermore, from an instance of the DbContext class you can retrieve and modify state information for
any entity that has been loaded into the context by using the Entry method. One use of this would be to
mark an entity as deleted; another use would be to replace the values of an entity with new values
provided externally to your API.

The following code example illustrates how you can modify state information for an entity and how you
can copy the values from one entity to another.

Modifying entity state


using (var context = new StudentsContext())
{
//Delete the student’s school:
Student student = context.Students.Find("Dave Barnett");
context.Entry(student.School).State = EntityState.Deleted;

//Copy student values over from another object:


context.Entry(student).CurrentValues.SetValues(otherStudent);
}

Finally, you can turn change tracking on and off globally by using the AutoDetectChangesEnabled
property of the ChangeTracker property of the DbContext class.
The following code example shows how you can turn change tracking on and off.

Turning change tracking off


DbContext.ChangeTracker.AutoDetectChangesEnabled = false;

If you use the preceding code to turn off automatic change tracking, you will have to call the
DbContext.ChangeTracker.DetectChanges method manually before you save any changes.
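
For example, the following sketch (with a hypothetical StudentsContext and a newStudents collection) disables automatic change detection while adding a large number of entities and calls DetectChanges explicitly before saving.

Calling DetectChanges manually

using (var context = new StudentsContext())
{
    context.ChangeTracker.AutoDetectChangesEnabled = false;

    foreach (var student in newStudents)
    {
        context.Students.Add(student);
    }

    // Automatic detection is off, so detect changes explicitly before saving.
    context.ChangeTracker.DetectChanges();
    context.SaveChanges();
}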

Note: Automatic change detection is enabled by default. Active change tracking with
runtime proxies, however, applies only to properties marked as virtual, because Entity Framework Core
cannot override non-virtual properties and therefore cannot intercept assignments to them.

Inserting New Entities


To add an entity to the database, you use the
DbContext object. When you use the DbContext
object to add a new entity to a database, the
context marks the change tracking status of the
entity as Added. When you call the SaveChanges
method, the DbContext object adds the entity to
the database. No changes are applied to the data
until you call the SaveChanges method.

The following code example shows how to add a


new entity to a database by using the DbContext
object. The Persons property of the
MyDbContext class is of type DbSet<Person>.

Adding an entity
using (var context = new MyDbContext())
{
context.Persons.Add(
new Person
{
DateOfBirth = new DateTime(1978, 7, 11),
Name = "John Doe"
});
context.SaveChanges();
}

Deleting Entities
To delete an entity from the database, you use the
DbContext object. When you delete an entity
from a database, the context marks the change
tracking status of the entity as Deleted. When you
call the SaveChanges method, the DbContext
object deletes the entity from the database.

The following code example shows how to delete


an entity from a database by using the
DbContext object. The Products property of the
ProductsContext class is of type
DbSet<Product>.

Deleting an entity
using (var ctx = new ProductsContext())
{
var product = (from m in ctx.Products where m.Name == "Orange Juice" select
m).Single();
ctx.Products.Remove(product);
ctx.SaveChanges();
}

If you already know the primary key of the entity that you want to delete, you do not need to retrieve it
from the database to delete it. You can manually add an entity with the desired primary key to the
context, use the Entry method of the DbContext to access the state of the entity, and then mark it as
deleted.

The following code example shows how to delete an entity from a database without first retrieving it from
the database.

Deleting an entity without first retrieving it from the database


using (var ctx = new ProductsContext())
{
var product = new Product {Id = 72};
ctx.Products.Add(product);
ctx.Entry(product).State = EntityState.Deleted;
ctx.SaveChanges();
}

Updating Entities
To update an entity in the database, you can use
the DbContext object and make changes in an
incremental fashion. When you update an entity,
the context marks the change tracking status of
the entity as Modified. When you call the
SaveChanges method, the DbContext object
updates the entity in the database. The exact
procedure of how these incremental updates are
performed depends on the change tracking status.

If active change tracking is enabled, the


DbContext object uses information maintained
internally to determine which columns must be
updated. If only passive change tracking is available, the DbContext object invokes the DetectChanges
method, which compares the entity to the snapshot that was taken when it was retrieved from the
database. In both cases, the DbContext object executes an SQL statement that updates only the columns
that were changed.
The following code example shows how to retrieve and update an entity by using the DbContext object.

Updating an entity
using (var context = new MyDbContext())
{
var student = (from s in context.Students where s.Name.ToLower().Contains("john")
select s).Single();
student.Name = "Jonathan";
context.SaveChanges();
}

You can update an entity that is not tracked by the context, such as an entity you received as a method
parameter, by attaching the entity to the context, and then manually setting the entity's state to
Modified.

Note: Updating a detached entity is a common scenario when working with services, because the
updated entity is sent to the service and not loaded from the context.

The following code example shows how to update an entity that is not tracked by the context.

Updating a non-tracked entity


using (var context = new MyDbContext())
{
context.Entry(updatedStudent).State = EntityState.Modified;
context.SaveChanges();
}

The preceding code uses the Entry method to attach the updatedStudent object to the context, and
then sets the entity's state to Modified. When the context tries to save the attached entity, it cannot
detect which properties were changed, because it does not know the original values of the properties.
Therefore, in this scenario, the SQL statement will update all the columns, even those that have not
changed.

If you are not sure whether the entity you want to update is already tracked or not by the context you are
using, such as when you receive the context as a parameter, do not use the Entry method. If your context
already tracks an instance of an entity, and you call the Entry method with a different instance of the
same entity, an exception will be thrown because the context cannot track two instances of the same
entity. If you do not know whether an entity is tracked or not, you have two options:
1. Use the Find method to load the entity to the context, and then use the
EntityEntry<T>.CurrentValues.SetValues method to update the loaded entity with the values of
the updated entity instance. The Find method will first search the context for the entity and if not
found, will load the entity from the database.

2. Search only the entities already loaded by the context for the entity to update, by using the Local
property of the DbSet. If it is found, use the EntityEntry<T>.CurrentValues.SetValues method
to update the entity according to the values of the updated entity. If it is not found, use the Entry
method to attach the entity to the context, and then set its state to Modified. By using the Local
property, you can avoid accessing the database if the entity is not found in the context.
This example shows the two ways to update a detached entity if you do not know whether the context
already tracks the entity or not.

Updating a detached entity with an existing context


// Option 1
var originalStudent = context.Students.Find(updatedStudent.StudentId);
context.Entry(originalStudent).CurrentValues.SetValues(updatedStudent);
context.SaveChanges();

// Option 2
var existingStudent = context.Students.Local.FirstOrDefault(r => r.StudentId ==
updatedStudent.StudentId);
if (existingStudent == null)
{
context.Entry(updatedStudent).State = EntityState.Modified;
}
else
{
context.Entry(existingStudent).CurrentValues.SetValues(updatedStudent);
}
context.SaveChanges();

For more information about Add/Attach and Entity States, see


http://go.microsoft.com/fwlink/?LinkID=313732

Demonstration: CRUD Operations in Entity Framework


Entity Framework Core provides CRUD operations. This keeps those operations simple and helps maintain
readable code. In this demonstration, you will see how to perform CRUD operations with Entity
Framework Core.

Demonstration Steps
You will find the steps in the “Demonstration: CRUD Operations in Entity Framework“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_DEMO.md.
Question: How do you create or modify a relationship (based on a foreign key) by using
Entity Framework Core?

Entity Framework Core Transactions


When you perform several operations on a
database, such as inserting an entity, updating the
properties of another entity, and deleting a third
one, handling individual failures along the way is a
daunting task. If the third of three operations fails,
you must dedicate considerable programming
resources to cleaning up the effects of the first
two operations; and even these cleanup
operations can fail, in turn. Furthermore, if every
operation becomes visible to other threads or
processes as soon as it is performed, the cleanup
process may affect the entire application or even
other applications accessing the same database.

For example, when you insert an order of a customer into the database, it may consist of multiple update
and insert operations. You might have to insert a record into the Orders table, a record into the Shipping
table, and modify the Inventory table to reflect the inventory changes because of fulfilling the order. If
any of these updates fail—for instance, if the Inventory table update fails because the item is no longer
available in stock—you need to carefully roll back the changes to the Orders and Shipping tables to
make sure you do not have an orphaned order that cannot be fulfilled. Similarly, if the Inventory table
update succeeds but an error occurs while inserting a record into the Shipping table, you must undo the
change in the Inventory table to make sure you do not lose inventory items. To aggravate the matter,
any updates you performed to the Inventory table may have been made visible to other applications, so
another process may have decided that an item is no longer in stock although your order has not been
successfully fulfilled.

Transactions
Transactions address the compensation and visibility issues by providing a scope of operations. A
transaction is a set of operations that runs in a sequence, and if one of the operations fails, the transaction
rolls back, and no operations are committed. You should use transactions if one operation depends on a
previous operation and cannot be committed without verifying that the previous operation was
successful. Also, you should use transactions when visibility is a concern, and you do not want to make a
change visible to other applications until the entire transaction completes.

By default, Entity Framework Core is transactional. When you call the SaveChanges method, it translates
the change set to SQL statements and starts with the BEGIN TRANSACTION SQL declaration. The SQL
transaction is not committed unless all the items are added, updated, or deleted successfully.

The BeginTransaction Method


To combine multiple operations into a single transaction in Entity Framework Core, you use the
BeginTransaction method before you perform the updates. Call the Commit method when you know all
the operations have completed successfully. The transaction commits only if you call the Commit
method. If the Commit method is not called, all the changes made within the transaction are rolled back
when the transaction object is disposed, even if each individual SaveChanges call completed successfully.

The following code example shows how to use the BeginTransaction method with Entity Framework
Core.

Using the BeginTransaction method


using (var ctx1 = new DataContext1())
{
using (var transaction = ctx1.Database.BeginTransaction())
{
// Update an entity
ctx1.SaveChanges();

// Update an entity
ctx1.SaveChanges();

// Update an entity
ctx1.SaveChanges();

transaction.Commit();
}
}

In the preceding code example, the changes made by the three SaveChanges method calls are
committed to the database only when the Commit method is called. If the transaction object is disposed
before Commit is called, all of these changes are rolled back.

For more information about transactions, see


https://aka.ms/moc-20487D-m2-pg5
Best Practice: Use the BeginTransaction method inside a using block to make sure that it
is disposed. If the object is disposed of before you call the Commit method (for example, if an
exception occurs within the using block), the transaction is aborted automatically and its changes
are rolled back.

Question: When should you use transactions and distributed transactions?



Testing Entity Framework Core with In-Memory database


Entity Framework Core enables testing your code
without the need to install SQL Server by using
the in-memory provider.

The in-memory provider is a general-purpose database intended for testing rather than a relational
database, so some code that works with the in-memory provider will fail with the SQL Server provider.
For example, default column values are not applied by the in-memory provider. For most testing
use cases, however, it works well.

The following code example shows how to declare the MyContext class, which derives from the
DbContext class and provides two constructors. The first constructor is used in the production
environment and gets its configuration from the OnConfiguring method; the second constructor is used
in unit testing and accepts options, such as the in-memory provider configured by the
UseInMemoryDatabase method.

Using the in-memory provider


class MyContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    // Used in production; configuration is supplied by OnConfiguring.
    public MyContext()
    {
    }

    // Used in unit tests; configuration (such as the in-memory provider) is injected.
    public MyContext(DbContextOptions<MyContext> options)
        : base(options)
    {
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)
        {
            optionsBuilder.UseSqlServer(connectionString); // connectionString is defined elsewhere
        }
    }
}

// In the test code, build options that use the in-memory provider:
var options = new DbContextOptionsBuilder<MyContext>()
    .UseInMemoryDatabase(databaseName: "TestDatabase")
    .Options;

using (var context = new MyContext(options))
{
    // Use the context against the in-memory database
}

Demonstration: Using Entity Framework with In-Memory Database


In this demonstration, you will create a DbContextOptions instance, configure it with a
DbContextOptionsBuilder, and use the UseInMemoryDatabase method to configure the in-memory provider.

Demonstration Steps
You will find the steps in the “Demonstration: Using Entity Framework with In-Memory Database“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD02_DEMO.md.

Using Entity Framework Core with third-party databases


Entity Framework Core is designed to work with
SQL Server and other databases by using
providers, such as the in-memory provider, which
was discussed in the previous topic.

Available providers:

• SQL Server

• SQLite
• In-memory

• PostgreSQL

• MySQL

• MyCat

• Firebird

The provider for Cosmos DB will be available soon.

Entity Framework Core Database Providers


https://aka.ms/moc-20487D-m2-pg6
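
For example, the following sketch shows how a context might be configured to use the SQLite provider instead of SQL Server. It assumes the Microsoft.EntityFrameworkCore.Sqlite package is installed, and the database file name is illustrative.

Configuring the SQLite provider

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    // Use a local SQLite database file instead of SQL Server.
    optionsBuilder.UseSqlite("Data Source=school.db");
}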

Demonstration: Using Entity Framework with SQLite


In this demonstration, you will switch from the in-memory database provider to the SQLite database provider.

Demonstration Steps
You will find the steps in the “Demonstration: Using Entity Framework with SQLite“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_DEMO.md.

Repository pattern
Entity Framework Core uses the DbContext class
to implement the Unit Of Work design pattern.
This pattern aggregates changes and commits
them to the database by using the SaveChanges
method. The DbContext class can be used
directly in the code. In some cases, such as in a
microservices architecture, you will want to use the
Repository design pattern. To implement the
Repository design pattern, wrap the DbContext
class with another class. The Repository design
pattern has several benefits:

• Isolates Entity Framework Core from the service layer.

• Allows you to mock repositories to simulate access to the database.

• Allows you to apply caching.

• Creates more maintainable and readable code.


The following code shows how to implement the Repository pattern.

Repository pattern
public class StudentRepository : IStudentRepository
{
    private StudentContext context;

    public StudentRepository(StudentContext context)
    {
        this.context = context;
    }

    public void SaveStudent(Student student)
    {
        context.Students.Add(student);
        context.SaveChanges();
    }

    public Student GetStudent(long id)
    {
        return context.Students.SingleOrDefault(s => s.Id == id);
    }

    public void DeleteStudent(long id)
    {
        Student student = GetStudent(id);
        context.Students.Remove(student);
        context.SaveChanges();
    }

    public void UpdateStudent(Student student)
    {
        // The student entity is assumed to be tracked by the context,
        // so saving persists any changes made to it.
        context.SaveChanges();
    }
}
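
The following sketch shows how such a repository might be used; the StudentContext options and the Student initializer shown here are illustrative.

Using the repository

using (var context = new StudentContext(options))
{
    IStudentRepository repository = new StudentRepository(context);

    repository.SaveStudent(new Student { Name = "Dana" });

    Student student = repository.GetStudent(1);
    student.Name = "Dana Smith";
    repository.UpdateStudent(student); // the entity is tracked, so SaveChanges persists the change
}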

Lab B: Manipulating Data


Scenario
In this lab, you will create a repository with CRUD methods and inject two kinds of database
configurations to work with SQL Express and SQLite.

Objectives
After completing this lab, you will be able to:

• Create a hotel booking repository and populate it with CRUD methods


• Test the queries with SQL Express and SQLite databases

Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD02_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD02_LAK.md.

Exercise 1: Create Repository Methods


Scenario
In this exercise, you will create the hotel booking repository.

Exercise 2: Test the Model Using SQL Server and SQLite


Scenario
In this exercise, you will inject a SQLite database into the repository, and you will create tests and run them
against SQLite and SQL Server.

Module Review and Takeaways


In this module, you learned how to use Entity Framework Core to implement a data access layer for your
application. First, you learned about fundamental ADO.NET concepts, such as connection, command, and
data reader. Next, you created database models by using the Entity Framework Code First approach, and
mapped classes to database tables and columns with data annotations, such as the [Table] and [Column]
attributes, and the Entity Framework Core Fluent API. Then, you learned how to query the database with
LINQ to Entities, Entity SQL, and raw SQL statements when necessary, and how to use lazy loading to
improve application performance and memory utilization. Finally, you learned how to use change tracking
with the DbContext class when inserting, deleting, and updating entities, and how to protect several
database operations as an atomic unit by using transactions.

Best Practices
• Always use transactions when performing multiple operations that depend on each other, and may
require compensation when they fail in isolation.
• Prefer using LINQ to Entities and not Entity SQL or raw SQL to query the database. This makes your
code less fragile and easier to refactor.

• Beware of lazy loading behavior when you return an entity to a higher layer in your application. If the
DbContext object is disposed and the entity has not been fully loaded, accessing its nested
properties may cause an exception.

• Use the Entity Framework Core Fluent API (instead of data annotations) when you map an existing
object model to a database, and when the object model should not change as a result of the
mapping.

Review Question
Question: Why should you use Entity Framework Core and not direct database manipulation
with SQL statements in ADO.NET?

Tools
• Visual Studio 2017
• SQL Server 2017 Express

• SQL Management Studio



Module 3
Creating and Consuming ASP.NET Core Web APIs
Contents:
Module Overview 3-1

Lesson 1: HTTP Services 3-2

Lesson 2: Creating an ASP.NET Core Web API 3-13


Lesson 3: Consuming ASP.NET Core Web APIs 3-20

Lab: Creating an ASP.NET Core Web API 3-25


Lesson 4: Handling HTTP Requests and Responses 3-26
Lesson 5: Automatically Generating HTTP Requests and Responses 3-30

Module Review and Takeaways 3-44

Module Overview
ASP.NET Core Web API provides a robust and modern framework for creating Hypertext Transfer Protocol
(HTTP)-based services. In this module, you will be introduced to the HTTP-based services. You will learn
how HTTP works and become familiar with HTTP messages, HTTP methods, status codes, and headers.
You will also be introduced to the Representational State Transfer (REST) architectural style and
hypermedia.

You will learn how to create HTTP-based services by using ASP.NET Core Web API. You will also learn how
to consume them from various clients. After Lesson 3, in the lab "Creating an ASP.NET Core Web APIs",
you will create a web API and consume it from a client.

Objectives
After you complete this module, you will be able to:

• Design services by using the HTTP protocol.


• Create services by using ASP.NET Core Web API.

• Use the HttpRequest/IActionResult classes to control HTTP messages.

• Consume ASP.NET Core Web API services.



Lesson 1
HTTP Services
HTTP is a communication protocol that was created by Tim Berners-Lee and his team while working on
the WorldWideWeb (later renamed to World Wide Web) project. Originally designed to transfer
hypertext-based resources across computer networks, HTTP is an application layer protocol that acts as
the primary protocol for many applications including the World Wide Web.

Because of its vast adoption and the common use of web technologies, HTTP is now one of the most
popular protocols for building applications and services. In this lesson, you will be introduced to the basic
structure of HTTP messages and understand the basic principles of the REST architectural approach.

Lesson Objectives
After you complete this lesson, you will be able to:
• Explain the basic structure of HTTP.

• Explain the structure of HTTP messages.

• Describe resources by using URIs.


• Explain the semantics of HTTP verbs.

• Explain how status codes are used.

• Explain the basic concepts of REST.


• Use media types.

Introduction to HTTP
HTTP is a first-class application protocol that was
built to power the World Wide Web. To support
such a challenge, HTTP was built to allow
applications to scale, taking into consideration
concepts such as caching and stateless
architecture. Today, HTTP is supported by many
different devices and platforms, reaching most
computer systems available today.
HTTP also offers simplicity, by using text messages
and following the request-response messaging
pattern. HTTP differs from most application layer
protocols because it was not designed as a
Remote Procedure Calls (RPC) mechanism or a Remote Method Invocation (RMI) mechanism. Instead,
HTTP provides semantics for retrieving and changing resources that can be accessed directly by using an
address.

HTTP Messages
HTTP is a simple request-response protocol. All
HTTP messages contain the following elements:

• Start-line

• Headers

• An empty line

• Body (optional)

Although requests and responses share the same


basic structure, there are some differences
between them of which you should be aware.

Request Messages
Request messages are sent by the client to the server. Request messages have a specific structure based
on the general structure of the HTTP messages.

This example shows a simple HTTP request message.

An HTTP request
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive

The first and most distinct difference between the request and response messages is the structure of
the start-line, which in a request message is called the request-line.

Request-line
This HTTP request message's start-line is a typical request-line with the following space-delimited parts:

• HTTP method. This HTTP request message uses the GET method, which indicates that the client is
trying to retrieve a resource. Verbs will be covered in-depth in the topic Using Verbs later in this
lesson.

• Request URI. This part represents the URI to which the message is being sent.

• HTTP version. This part indicates that the message uses HTTP version 1.1.

Headers
This request message also has several headers that provide metadata for the request. Although headers
exist in both response and request messages, some headers are used exclusively by one of them. For
example, the Accept header is used in requests to communicate the kinds of responses the clients would
prefer to receive. This header is a part of a process known as content negotiation that will be discussed
later in this module.

Body
The request message has no body. This is typical of requests that use the GET method.

Response Messages
Response messages also have a specific structure based on the general structure of HTTP messages.

This example shows a simple HTTP response message.

The HTTP response returned by the server for the above request
HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Tue, 13 Nov 2012 18:05:11 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close

{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"L
astName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft
road","Passport":"AB123456789"}

Status-Line
HTTP response start-lines are called status-lines. This HTTP response message has a typical status-line with
the following space-delimited parts:

• HTTP version. This part indicates that the message uses HTTP version 1.1.

• Status-Code. Status-codes help define the result of the request. This message returns a status-code
of 200, which indicates a successful operation. Status codes will be covered in-depth later in this
lesson.
• Reason-Phrase. A reason-phrase is a short text that describes the status code, providing a human-
readable version of the status code.

Headers
Like the request message, the response message also has headers. Some headers are unique for HTTP
responses. For example, the Server header provides technical information about the server software being
used. The Cache-Control and Pragma headers describe how caching mechanisms should treat the
message.
Other headers, such as the Content-Type and Content-Length, provide metadata for the message body
and are used in both requests and responses that have a body.

Body
A response message returns a representation of a resource in JavaScript Object Notation (JSON). The
JSON, in this case, contains information about a specific traveler in a travel management system. The
format of the representation is communicated by using the Content-Type header describing what is
known as media type. Media types are covered in-depth later in this lesson.

Identifying Resources by Using URI


Uniform Resource Identifier (URI) is an addressing
standard that is used by many protocols. HTTP
uses URI as part of its resource-based approach to
identify resources over the network.

HTTP URIs follow this structure:

"http://" host [ ":" port ] [ absolute path [ "?" query


]]
• http://. This prefix is standard to HTTP
requests and defines the HTTP URI schema to
be used.

• Host. The host component of the URI identifies a computer by an IP address or a registered name.

• Port (optional). The port defines a specific port to be addressed. If not present, a default port will be
used. Different schemas can define different default ports. The default port for HTTP is 80.
• Absolute path (optional). The path provides additional data that together with the query describes
a resource. The path can have a hierarchical structure like a directory structure, separated by the slash
sign (/).

• Query (optional). The query provides additional nonhierarchical data that together with the path
describes a resource.
Different URIs can be used to describe different resources. For example, the following URIs describe
different destinations in an airline booking system:

• http://localhost/destinations/seattle

• http://localhost/destinations/london
When accessing each URI, a different set of data, also known as a representation, will be retrieved.

The URI Request For Comments (RFC 3986)


http://go.microsoft.com/fwlink/?LinkID=298757&clcid=0x409

Using Verbs
HTTP defines a set of methods, or verbs, that add action-like semantics to requests. HTTP 1.1
defines an extensible set of eight methods, each
with a different behavior. For example, the
following request uses the GET method to retrieve
information about a specific traveler in an airline
traveler system.

This example shows an HTTP GET request message.

An HTTP GET request retrieving data about a specific traveler


GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive

In the above example, the method is defined in the first segment of the request-line and communicates
what the request is intended to perform. For example, the GET method used in the request above
communicates that the request intends to retrieve data about an entity, not to modify it. This behavior
gives GET both properties an HTTP method might have: it is both safe and idempotent.

• Safe verbs. These are verbs that are not intended to have any side effects on the resource state on
the server, other than retrieving data.
• Idempotent verbs. These are verbs that are intended to have the same effect on the resource state
when the same request is sent to the server multiple times. For example, sending a single DELETE
request to delete a resource should have the same effect as sending the same DELETE request
multiple times.

Verbs are a central mechanism in HTTP and one of the mechanisms that make HTTP a powerful protocol.
Understanding what each verb does is very important for developing HTTP-based services. The following
verbs are defined in HTTP 1.1 (the properties of each verb appear in parentheses):

• GET (Safe, Idempotent). Requests intended to retrieve data based on the request URI. Used to
retrieve a representation of a resource.

• HEAD (Safe, Idempotent). Requests intended to have the identical result of GET requests but without
returning a message body. Used to check request validity and retrieve header information without the
message body.

• OPTIONS (Safe, Idempotent). Requests intended to return information about the communication
options and capabilities of the server. Used to retrieve a comma-delimited list of the HTTP verbs
supported by a resource or a server in the Allow header.

• POST. Requests intended to send an entity to the server. The actual operation that is performed by
the request is determined by the server. The server should return information about the outcome of
the operation in the result. Used to create, update, and, by some protocols, retrieve entities from the
server. POST is the least structured HTTP method.

• PUT (Idempotent). Requests intended to store the entity sent in the request URI, completely
overriding any existing entity in that URI. Used to create and update resources.

• DELETE (Idempotent). Requests intended to delete the entity identified by the request URI. Used to
delete resources.

• TRACE (Safe, Idempotent). Requests intended to indicate to clients what is received at the server end.
Rarely implemented; used to identify proxies the message passes on the way to the server.

• CONNECT (Safe, Idempotent). Requests intended to dynamically change the communication
protocol. Used to start SSL tunneling.

For more information about HTTP methods, refer to the HTTP 1.1 Request For Comments (RFC 2616).

Methods definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298758&clcid=0x409

Status-Codes and Reason-Phrases


Status-codes are three-digit integers returned as part of a response message's status-line. Status codes
describe the result of the server's effort to satisfy the request. The next section of the status-line, after the
status code, is the reason-phrase, a human-readable textual description of the status code.

Status codes are divided into five classes, or categories. The first digit of the status code indicates the
class of the status:

• 1xx – Informational. Codes that return an informational response about the state of the connection.
Examples: 101 Switching Protocols.

• 2xx – Successful. Codes that indicate the request was successfully received and accepted by the
server. Examples: 200 OK, 201 Created.

• 3xx – Redirection. Codes that indicate that additional action should be taken by the client (usually
with respect to a different network address) to achieve the result that you want. Examples: 301 Moved
Permanently, 302 Found, 303 See Other.

• 4xx – Client Error. Codes that indicate an error that is caused by the client's request. This might be
caused by a wrong address, a bad message format, or any kind of invalid data passed in the client's
request. Examples: 400 Bad Request, 401 Unauthorized, 404 Not Found.

• 5xx – Server Error. Codes that indicate an error that was caused by the server while it tried to process
a seemingly valid request. Examples: 500 Internal Server Error, 505 HTTP Version Not Supported.

For more information about HTTP status codes, refer to the HTTP 1.1 Request For Comments (RFC 2616).

HTTP Status-Codes definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298759&clcid=0x409

Introduction to REST
Until now in this module, you have learned how
HTTP acts as an application layer protocol. HTTP is
used to develop both websites and services.
Services developed by using HTTP are generally
known as HTTP-based services.

The term REST describes an architectural style that takes advantage of the resource-based nature of
HTTP. It was first used in 2000 by Roy Fielding,
one of the authors of the HTTP, URI, and HTML
specifications. Fielding described in his doctoral
dissertation an architectural style that uses some
elements of HTTP and the World Wide Web for
creating scalable and extendable applications.

Today, REST is used to add important capabilities to a service. These capabilities include:
• Service discoverability

• State management

In this lesson, you will learn about these capabilities. For more information about REST, refer to Roy
Fielding's dissertation, Architectural Styles and the Design of Network-based Software Architectures.

Architectural Styles and the Design of Network-based Software Architectures by Roy Fielding
http://go.microsoft.com/fwlink/?LinkID=298760&clcid=0x409

Services that use the REST architectural style are also known as RESTful services. A simple way to
understand what makes a service RESTful is to use a taxonomy called the Richardson Maturity Model, first
suggested by Leonard Richardson in his talk at the QCon San Francisco conference in 2008.

The Richardson Maturity Model


The Richardson Maturity Model describes four levels of maturity for services, starting with the least
RESTful level and advancing toward fully RESTful services:

• Level zero services. Use HTTP as a transport protocol by ignoring the capabilities of HTTP as an
application layer protocol. Level zero services use a single address, also known as an endpoint and a
single HTTP method, which is usually POST. SOAP services and other RPC-based protocols are
examples of level zero services.
• Level one services. Identify resources by using URIs. Each resource in the system has its own URI by
which the resource can be accessed.

• Level two services. Uses the different HTTP verbs to allow the user to manipulate the resources and
create a full API based on resources.

• Level three services. Although the previous levels only emphasize the suitable use of HTTP
semantics, level three services introduce hypermedia, an extension of the term hypertext, as a means
for resources to describe their own state in addition to their relations to other resources.

For more information about the Richardson Maturity Model, refer to Leonard Richardson’s presentation
and notes.
Leonard Richardson’s QCon 2008 presentation and notes
https://aka.ms/moc-20487D-m3-pg1

Hypermedia
When the World Wide Web started, it strongly affected the way humans consume data. Alongside
abilities, such as remote access to data and the ability to search a global knowledge base, the World Wide
Web also introduced hypertext. Hypertext is a nonlinear format that enables readers to access data
related to a specific part of the text by using hyperlinks. The term hypermedia describes a logical
extension of the same concept. Hypermedia-based systems use hypermedia elements, known as
hypermedia controls, such as links and HTML forms, to enable resources to describe their current state
and other resources that are related to them.

Hypermedia and Discoverability


A simple example for resource discoverability can be found in Atom Syndication Format. At first, Atom
Syndication Format was developed as an alternative to RSS for publishing web feeds. Atom feeds are
resources with their own URIs that contain items. Feed items are resources themselves with their own URIs
published as links in the feed representation, which makes them discoverable to clients.

This feed describes different instances of a flight in the BlueYonder Companion app. The Hypermedia
control entry is used here to refer clients to different instances of a specific flight.

A simple Atom feed


HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: application/atom+xml
Content-Length: 746
Connection: Close

<?xml version="1.0" encoding="utf-8"?>


<feed xmlns="http://www.w3.org/2005/Atom">
<title type="text">Blue Yonder flights</title>
<id>uuid:460f9be6-3503-43c5-8168-5cb86127b572;id=1</id>
<updated>2012-11-16T21:50:17Z</updated>
<entry xml:base="http://localhost:4392/Flights/BY002/1117">
<id>BY002</id>
<title type="text">Flight BY002 November 17, 2012</title>


<updated>2012-11-16T21:50:17Z</updated>
</entry>
<entry xml:base="http://localhost:4392/Flights/BY002/1201">
<id>BY002</id>
<title type="text">Flight BY002 December 01, 2012</title>
<updated>2012-11-16T21:50:17Z</updated>
</entry>
<entry xml:base="http://localhost:4392/Flights/BY002/1202">
<id>BY002</id>
<title type="text">Flight BY002 December 02, 2012</title>
<updated>2012-11-16T21:50:17Z</updated>
</entry>
</feed>

Hypermedia and State Transfer


Another pattern supported by hypermedia is state transfer. To manage the state of resources, RESTful
services use hypermedia to describe what can be done with the resource when it returns its
representation. For example, if a resource representing a flight enables the user to book tickets, a
hypermedia control describing how you can do this should be present. As soon as the flight can no longer
be booked for any reason (it is fully booked, canceled, and so on), the hypermedia control should not be
returned in the resource’s representation.

This response represents a flight that enables booking in its current state.

A response with Hypermedia control for booking flights


HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-
8?B?QzpcU2VsYVxNT0NcMjA0ODdBXFNvdXJjZVxBbGxmaWxlc1xNb2QwM1xMYWJmaWxlc1xCbHVlWW9uZGVyLkNvb
XBhbmlvblxCbHVlWW9uZGVyLkNvbXBhbmlvbi5Ib3N0XGZsaWdodHM=?=
X-Powered-By: ASP.NET
Date: Wed, 05 Dec 2012 11:12:19 GMT
Content-Length: 312

{
"Source":{"Country":"Italy","City":"Rome"},
"Destination":{"Country":"France","City":"Paris"},
"Departure":"2014-02-01T08:30:00",
"Duration":"02:30:00",
"Price":387.0},
FlightNumber":"BY001",
"links":[
{
"rel": "booking",
"Link": "http://localhost/flights/by001/booking"
}
]
}

Hypermedia is what differentiates REST from HTTP-based services. It is a simple but powerful concept that
enables a range of capabilities and patterns including service versioning, aspect management, and more
which are beyond the scope of this course. Today, more and more formats and APIs are created by using
hypermedia.

One of the media types supporting hypermedia is the Hypertext Application Language (HAL). The HAL
media type offers link-based hypermedia. For more information about HAL, refer to the HAL format
specifications.

Hypertext Application Language (HAL)


http://go.microsoft.com/fwlink/?LinkID=298762&clcid=0x409

Media Types
HTTP was originally designed to transfer
hypertext. Hypertext is a nonlinear format that
contains references to other resources, some of
which are other hypertext resources. However,
some resources contain other formats such as
image files and videos, which required HTTP to
support the transfer of different types of message
formats. To support different formats, HTTP uses
Multipurpose Internet Mail Extensions (MIME)
types, also known as media types. MIME types
were originally designed for use in defining the
content of email messages sent over SMTP.

Media types are made of two parts, a type and a subtype, optionally followed by type-specific parameters.
For example, the text type indicates human-readable text and can be followed by subtypes such as html,
which indicates HTML content, or plain, which indicates a plain text payload.

Common text media types


text/html
text/plain

In addition, the text type supports a charset parameter, so the following declaration is also valid.

The Charset parameter used in text media types


text/html; charset=UTF-8

In HTTP, media types are declared by using headers as part of a process that is known as content
negotiation. Content negotiation is not restricted to media type and includes support for language
negotiation, encoding, and more. The following section shows how content negotiation is used for
handling media types.

The Accept Header


When a client sends a request, it can include a list of the media types it is willing to accept in the
response, in order of preference.

This request message uses the Accept header to communicate to the server what media types it can
accept.

An HTTP request message starting content negotiation


GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive

Although the server should try to fulfill the request for content, this is not always possible. Be aware that
in the previous request, the type */* indicates that if text/html and application/xhtml+xml are not
available, the server should return whatever type it can.

The Content-Type Header


In HTTP, any message that contains an entity-body should declare the media type of the body by using
the Content-Type header.

This request message uses the Content-Type header to declare what media types it uses for the entity-
body.

An HTTP response message returning application/json representation of a traveler entity

HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Sat, 17 Nov 2012 13:27:20 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close

{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"L
astName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft
road","Passport":"AB123456789"}

Media types define the structure of HTTP message bodies. Content negotiation enables servers and clients
to set expectations about the content they will exchange during an HTTP transaction. Content
negotiation is not limited to media types. For example, content negotiation is used to negotiate content
compression by using the Accept-Encoding header, localization by using the Accept-Language header,
and more.
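
The following is a minimal sketch of a client starting content negotiation from .NET code by using the
HttpClient class (covered later in this module); the address and resource path are only illustrative.

Setting the Accept and Accept-Language headers with HttpClient
// The header value types are defined in System.Net.Http.Headers.
var client = new HttpClient { BaseAddress = new Uri("http://localhost:4392/") };

// Ask for JSON first and fall back to any media type, preferring US English responses.
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("*/*", 0.5));
client.DefaultRequestHeaders.AcceptLanguage.Add(new StringWithQualityHeaderValue("en-US"));

HttpResponseMessage response = await client.GetAsync("travelers/1");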

Content negotiation in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298763&clcid=0x409

Question: Why do you need different HTTP Verbs?



Lesson 2
Creating an ASP.NET Core Web API
ASP.NET Core Web API is the first full-featured framework for developing HTTP-based services in .NET
Core. Using ASP.NET Core Web API gives developers reliable methods for creating, testing, and deploying
HTTP-based services. In this lesson, you will learn how to create ASP.NET Core Web API services and how
they are mapped to the different parts of HTTP. You will also learn how to interact directly with HTTP
messages.

Lesson Objectives
After you complete this lesson, you will be able to:

• Describe ASP.NET Core Web API and how it is used for creating HTTP-based services.
• Create routing rules.

• Create ASP.NET Core Web API controllers.

• Define action methods.


• Create and run an HTTP-based service by using ASP.NET Core Web API.

Introduction to ASP.NET Core Web API


HTTP has been around ever since the World Wide
Web was created in the early 1990s, but adoption
of it as an application protocol for developing
services took time. In the early years of the web,
SOAP was considered the application protocol of
choice by most developers. SOAP provided a
robust platform for developing RPC-style services.
With the appearance of internet-scale applications
and the growing popularity of Web 2.0, it became
clear that SOAP was not fit for such challenges
and HTTP received increasing attention.

HTTP in .NET Framework


For the better part of the first decade of its existence, .NET Framework did not have a first-class
framework for building HTTP services. At first, ASP.NET provided a platform for creating HTML-based
web-pages and ASP.NET web services, and later on, Windows Communication Foundation (WCF) provided
SOAP-based platforms. For these reasons, HTTP never received the attention it deserved.
The need for a comprehensive solution for developing HTTP services in .NET Framework justified creating
a new framework. Therefore, in October 2010, Microsoft announced the WCF Web API, which introduced
a new model and additional capabilities for developing HTTP-based services. These capabilities included:

• Better support for content negotiation and media types.

• APIs to control every aspect of the HTTP messages.

• Testability.

• Integration with other relevant frameworks like Entity Framework and Unity.

The WCF Web API team released six preview versions until in February 2012, they were merged with the
ASP.NET team, forming the ASP.NET Web API.

HTTP in .NET Core


In June 2016, Microsoft released the first version of .NET Core and ASP.NET Core. .NET Core is a cross-
platform implementation of the common language runtime (CLR), so you can run some .NET applications
on Windows, Linux, and macOS.

Therefore, ASP.NET Core, which runs on .NET Core, is also a cross-platform, high-performance framework
for building modern back-end applications, such as web apps that run in the cloud and on-premises.

Creating a Web API Controller


ASP.NET Core Web API services are implemented
in classes called controllers, which derive from the
Microsoft.AspNetCore.Mvc.Controller class. As
soon as a request is routed to a controller based
on a URI, the Controller takes control in finding
and running the appropriate action.

The Controller also provides APIs for handling HTTP requests, validating input parameters, and
interacting with the context of the operation. In
fact, a big part of the capabilities of ASP.NET Core
Web API described in this module and in Module
4, "Extending ASP.NET Core HTTP Services", are
exposed and managed by the Controller.

Defining Controllers
To create a controller, you must do the following:

• Create a class that derives from the Microsoft.AspNetCore.Mvc.Controller class.


• Name the class with the Controller suffix.

This code example shows how to define a controller.

The flights controller


public class FlightsController : Controller
{
}

The Responsibilities of the Controller


ASP.NET Core Web API controllers must derive from the Controller class. The reason for deriving from
the Controller class is that in addition to defining a logical unit of the service, the Controller class does
lots of other work. Among the responsibilities of the Controller class, you can find:

• Action Selection. The Controller class is responsible for calling the ActionSelector class, which is
responsible for running the action method.

• Applying Filters. ASP.NET Core Web API filters let developers extend the request/response pipeline.
Before running an action method, the Controller class is in charge of applying and running the filters
in the correct order before and after running the action methods.

Additional APIs in the Controller


The Controller class also exposes additional APIs. Most of them are based on the
Microsoft.AspNetCore.Mvc.ControllerContext class, which represents the context of the current HTTP
request. The request contains information such as the current route that is being used and the request
message.

The Controller class exposes ControllerContext by using the ControllerContext property. In addition,
the Controller class also provides some properties that expose specific data that is a part of the
ControllerContext property such as the Request property that provides access to the HttpRequest
representing the HTTP request for the operation. The HttpRequest class is discussed in-depth in Lesson 4,
Handling HTTP Requests and Responses, of this module.

Additional Reading: Filters are discussed in-depth throughout Module 4. Action and
Exception filters are discussed in Module 4, Lesson 2, Customizing Controllers and Actions.

Action Methods and HTTP Verbs


As soon as ASP.NET Core Web API chooses a controller, it can start to handle the next step of choosing
the method that will handle the request. The selection of the action is performed by the Controller class.
When choosing an action method, the Controller class gets an instance of a class implementing
Microsoft.AspNetCore.Mvc.Infrastructure.IActionSelector interface from the configuration. The
default implementation is the Microsoft.AspNetCore.Mvc.Internal.ActionSelector class.

The method selection can be based on the HTTP method that the request used and on the request-
URI. There are several techniques for mapping actions:

• Mapping to HTTP methods based on convention.

• Mapping to request-URIs based on the {action} placeholder in route templates.


In addition to matching the HTTP method or request-URI to the method name or attribute, ASP.NET Core
Web API takes the parameters that are passed to the method into consideration and makes sure that they
match.

Mapping by Action Name


In addition to the controller placeholder in routes, ASP.NET Core Web API also has a special placeholder
for the action. When the controller identifies an action in the route, it will first map the action to action
methods, which match the name of the action. Matching is performed for methods that fit one of the
following criteria:

In each of the following examples, the action name being matched is Flights.

• The name of the method is the same as the action name:

public IActionResult Flights()
{
}

• The name of the method matches the action with the prefix of a valid HTTP method name:

public IActionResult GetFlights()
{
}

• The method has an action name that is defined by using the [ActionName] attribute:

[ActionName("Flights")]
public IActionResult AirTrips()
{
}

Mapping by HTTP Method


The ActionSelector also maps actions by HTTP methods. This can be done by using one of the following
techniques:

• Matching by using a prefix or method name:

public IActionResult GetFlights()

public IActionResult Get()

• Matching by using the [AcceptVerbs] attribute:

[AcceptVerbs("GET")]
public IActionResult AirTrips()

[AcceptVerbs(AcceptVerbs.Get)]
public IActionResult AirTrips()

• Matching by using a specific implementation of HttpMethodAttribute:

[HttpGet]
public IActionResult Flights(int id)

[HttpDelete]
public IActionResult Flights(int id)

Note: This convention and the HttpVerb enum support only the GET, HEAD, PUT, POST,
OPTIONS, PATCH, and DELETE methods.

Creating Routing Rules


One of the first challenges when developing
HTTP-based services is mapping HTTP requests to
the code being run by the server based on the
request-URI and HTTP-method. The process of
mapping the request URI to a class or a method is
called routing.

Routing Tables
ASP.NET Core uses the
Microsoft.AspNetCore.Routing.IRouter
interface to describe the different routes that were
configured before the initialization of the host. A
route contains a URI template and default values
for the template. ASP.NET Core uses routes to map HTTP requests based on their request-URI and HTTP
method to the correlating code in the server.

Defining Routes
ASP.NET Core Web API routes are defined in Startup.cs by using the MapRoute extension method as is
shown in the following code.
This example shows the configuration of a simple route based on the name of the controller.

Configuring a route by using the RouteBuilder class


public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
services.AddRouting();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)


{
var routeBuilder = new RouteBuilder(app);

routeBuilder.MapRoute(
name: "Default",
template: "{controller}/{action}/{id?}",
defaults: new { controller = "Home", action = "Index" }
);

var routes = routeBuilder.Build();


app.UseRouter(routes);
}

Routes Definition in Details


To understand routes, you must understand how the ASP.NET Core Web API services are implemented.
The ASP.NET Core Web API services are implemented in classes called controllers, each controller
exposing one or more public methods called actions. The hosting environment uses routing to deliver
HTTP requests to the actions designed to handle those requests.

The following headings discuss controllers and actions in-depth because understanding controllers and
actions is important to understanding routes.

How Controllers Are Mapped


The ASP.NET Core Web API services are implemented in classes called controllers. Controllers are
implemented by using two constraints:

• They must derive from the Microsoft.AspNetCore.Mvc.Controller class.

• By convention, they must be named with the Controller suffix.

When ASP.NET Core Web API receives a request that matches the template in the route, it looks for a
controller that matches the value that was passed in the controller placeholder of the URI template by
name. For example, a URI with the relative path "flights/by001" will be evaluated against the template
defined in the earlier example ("{controller}/{action}/{id?}"). ASP.NET Core Web API will
look for a controller that is named FlightsController.

This controller maps when the flights value is passed as the value for the {controller} placeholder.

The flights controller


public class FlightsController : Controller
{
}

How Actions Are Mapped


Conventions play a big role in the ASP.NET Core ecosystem and ASP.NET Core Web API is no different.
When looking at the route template, you will notice that there is no placeholder for actions even though
action methods must be run to handle incoming requests. This is possible because of a convention that
maps methods based on their prefix to HTTP verbs.
This action method is chosen when sending a GET request by using the flights/by001 path.

An action definition
public class FlightsController : Controller
{
public IActionResult Get(string id)
{
// Place code here to return an IActionResult
}
}

Note: This convention only supports the GET, HEAD, PUT, POST, OPTIONS, PATCH, and
DELETE methods. However, actions also support attribute-based routing as described later in this
lesson.

How Parameters Are Mapped


In this example, a parameter called id was also mapped as a part of the absolute path of the URI. In
ASP.NET Core, parameters are matched as part of a process that is known as Parameter Binding. The
default behavior of Parameter Binding is to bind simple types from the URI and complex types from the
entity-body of the request.

For parameter bindings, simple types include all .NET primitives with the addition of DateTime, Decimal,
TimeSpan, String, and Guid.

For more information, refer to:


https://aka.ms/moc-20487D-m3-pg2

Demonstration: Creating Your First ASP.NET Core Web API


In this demonstration, you will create a new ASP.NET Core Web application project by using the Web API
template, view the code generated by Visual Studio Code, and apply changes to the actions and routing
templates.

Demonstration Steps
You will find the steps in the “Demonstration: Creating Your First ASP.NET Core Web API “ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.

Lesson 3
Consuming ASP.NET Core Web APIs
As with any other application, the ASP.NET Core Web API services need a process to give them a runtime
environment. This runtime must accommodate code that potentially serves many clients. When
developing services, hosting environments provide most of the capabilities needed to service client
requests and maintain a quality of service. You will learn how to consume the service from various client
environments including HTML, JavaScript, and .NET Core.

Lesson Objectives
After you complete this lesson, you will be able to:

• Consume ASP.NET Core Web API by using browser-based applications.


• Consume ASP.NET Core Web API from .NET Core applications.

• Handle exceptions and retries.

Consuming Services from Browsers


When HTTP was built in the early 1990s, it was
made for a very specific kind of client: web
browsers running HTML. Before the creation of
JavaScript in 1995, HTML was using two of the
three HTTP methods in HTTP 1.0: GET and POST.
GET requests are usually invoked by entering a URI in the address bar or through hypertext references
such as img and script tags. For example, entering the http://localhost:7086/Locations URI generates
the following GET request.

A GET request invoked by a web browser


GET http://localhost:7086/Locations HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:7086
DNT: 1
Connection: Keep-Alive

Another way to start HTTP requests from a browser is by using HTML forms. HTML forms are HTML
elements that create a form-like UI in the HTML document that lets the user insert and submit data to the
server. HTML forms contain sub-elements, called input elements, and each represents a piece of data both
in the UI and in the resulting HTTP message.

This HTML form lets users submit a new location to the server from a web browser, generating a POST
request.

An HTML form for submitting a new location


<form name="newLocation" action="/locations/" method="post">
<input type="text" name="LocationId" /><br />


<input type="text" name="Country" /><br />
<input type="text" name="State" /><br />
<input type="text" name="City" /><br />
<input type="submit">
</form>

This HTTP message was generated by submitting the newLocation HTML form.

An HTML form generated POST request


POST http://localhost:7086/locations/ HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: http://localhost:7086/default.html
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 49
DNT: 1
Host: localhost:7086
Pragma: no-cache

LocationId=7&Country=Belgium&State=&City=Brussels

The most flexible mechanism for making HTTP requests from a browser environment is JavaScript. Using
JavaScript provides two main capabilities that are lacking in other browser-based techniques:
• Complete control over the HTTP requests (including HTTP method, headers, and body).

• Asynchronous JavaScript and XML (AJAX). Using AJAX, you can send requests from the client after the
browser completes loading the HTML. Based on the result of the calls, you can use JavaScript to
update parts of the HTML page.

Demonstration: Consuming Services by Using JavaScript


In this demonstration, you will show how to consume a service by using JavaScript.

Demonstration Steps
You will find the steps in the “Demonstration 1: Consuming Services by Using JavaScript“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.

Consuming Services from .NET Clients with HttpClient


ASP.NET Core Web API also provides a client-side API for consuming HTTP services
from .NET applications. The main class for this API is
System.Net.Http.HttpClient. This provides basic
functionality for sending requests and receiving
responses.

HttpClient keeps a consistent API with ASP.NET Core Web API by using HttpRequestMessage
and HttpResponseMessage for handling HTTP
messages. The HttpClient API is a task-based
asynchronous API providing a simple model for
consuming HTTP asynchronously.

This code example uses the HttpClient to send a GET request, receive an HttpResponseMessage from
the server, and then read its content as a string.

Using the HttpClient GetAsync method.


var client = new HttpClient
{
BaseAddress = new Uri("http://localhost:12534/")
};

HttpResponseMessage message = await client.GetAsync("destinations");


string res = await message.Content.ReadAsStringAsync();
Console.WriteLine(res);

Although this code provides a simple asynchronous API, it is not common for the client to require string
representation of the data. A more useful approach is to obtain a deserialized object based on the entity
body.
To support serializing and de-serializing objects, HttpClient uses a set of extensions defined in
System.Net.Http.Formatting.dll that is a part of the Microsoft ASP.NET Web API Client Libraries
NuGet package. System.Net.Http.Formatting.dll adds the extension methods to the System.Net.Http
namespace so that no additional using directive is needed.

This code example uses the ReadAsAsync<T> extension method to deserialize the content of the HTTP
message into a list of destinations.

Using the ReadAsAsync<T> extension method


var client = new HttpClient
{
BaseAddress = new Uri("http://localhost:12534/")
};

HttpResponseMessage message = await client.GetAsync("destinations");


var destinations = await message.Content.ReadAsAsync<List<Destination>>();
Console.WriteLine(destinations.Count);

Exception Handling and Retries


Consuming HTTP services can fail for many reasons, such as network issues or timeouts.
HttpClient provides an API for handling these failures: the HttpMessageHandler class.

The HttpMessageHandler class provides a pipeline API for handling HTTP messages and separates the
logic of handling HTTP requests into different classes; for example, a class for writing to a log and a class
that provides a retry mechanism.

Declaring logging message handler


A logging message handler
public class LoggingHandler : DelegatingHandler
{
private readonly ILogger _logger;

public LoggingHandler(ILogger logger)


{
_logger = logger;
}

protected override async Task<HttpResponseMessage> SendAsync(


HttpRequestMessage request,
CancellationToken cancellationToken)
{
_logger.Trace($"Request: {request}");
try
{
// base.SendAsync calls the inner handler
HttpResponseMessage response = await base.SendAsync(request,
cancellationToken);
_logger.Trace($"Response: {response}");
return response;
}
catch (Exception ex)
{
_logger.Error($"Failed to get response: {ex}");
throw;
}
}
}

Declaring retry mechanism handler


A retry message handler
public class RetryHandler : DelegatingHandler
{
Func<Exception, int, bool> _condition;

public RetryHandler(Func<Exception, int, bool> condition)


{
_condition = condition;
}

protected override async Task<HttpResponseMessage> SendAsync(


HttpRequestMessage request,
CancellationToken cancellationToken)
{
int retries = 0;
while (true)
{
try
{
// base.SendAsync calls the inner handler
HttpResponseMessage response = await base.SendAsync(request,
cancellationToken);

// Not something we can retry, return the response as is


return response;
}
catch (Exception ex) when (_condition(ex, retries))
{
retries++;
// Network error
// Wait a bit and try again later
await Task.Delay(2000, cancellationToken);
continue;
}
}
}
}

HttpClient accepts an HttpMessageHandler in its constructor, and because the handlers derive from
DelegatingHandler, they can be concatenated into a pipeline.

Using message handler


Use the retry message handler
RetryHandler retryHandler = new RetryHandler((ex, retries) => ex is SocketException ||
retries < 3);
retryHandler.InnerHandler = new HttpClientHandler();

HttpClient client = new HttpClient(retryHandler);


HttpResponseMessage res = await client.GetAsync("http://...");
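
Because every DelegatingHandler exposes an InnerHandler property, the logging and retry handlers
shown above can also be chained into a single pipeline. The following is a minimal sketch; the
ConsoleLogger type is a hypothetical implementation of the ILogger interface that LoggingHandler
expects, and the base address is only illustrative.

Chaining message handlers into a pipeline
// The outermost handler logs, the next one retries, and HttpClientHandler sends the request.
// SocketException is defined in System.Net.Sockets; ConsoleLogger is hypothetical.
var pipeline = new LoggingHandler(new ConsoleLogger())
{
    InnerHandler = new RetryHandler((ex, retries) => ex is SocketException && retries < 3)
    {
        InnerHandler = new HttpClientHandler()
    }
};

HttpClient client = new HttpClient(pipeline);
HttpResponseMessage response = await client.GetAsync("http://localhost:12534/destinations");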

Demonstration: Consuming Services by Using HttpClient


In this demonstration, you will learn how to consume HTTP services from .NET Framework applications by
using the HttpClient class. You will install the Microsoft AspNet WebApi Client to add an extension
method that deserializes the JSON response into .NET objects.

Demonstration Steps
You will find the steps in the “Consuming Services by Using HttpClient“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.
Question: What are the benefits of HttpClient that makes it more useful than
HttpWebRequest and WebClient?

Lab: Creating an ASP.NET Core Web API


Scenario
In this lab, you will create and use ASP.NET Core Web APIs.

Objectives
After you complete this lab, you will be able to:

• Create a Web API controller to expose APIs.

• Invoke the API through a browser.

• Create a console application client and connect to the server by using HttpClient.

Lab Setup
Estimated Time: 30 Minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD03_LAK.md.

Exercise 1: Create a Controller Class


Scenario
Implement the booking service by using ASP.NET Core Web API. Start by creating a new ASP.NET Core
Web API controller, and implement CRUD functionality using the POST, GET and PUT HTTP methods.

Exercise 2: Use the API from a Browser


Scenario
In this exercise you will invoke the GET action method of the HotelBookingController class via the web
browser.

Exercise 3: Create a Client


Scenario
In this exercise, you will create a Console Application client which will use HttpClient to call the GET and
PUT action methods of the controller. Then, you will deserialize the response into objects.

Lesson 4
Handling HTTP Requests and Responses
Creating an instance of a class and finding the method to run is not always enough. To provide a real
solution for HTTP-based services, ASP.NET Core Web API must provide additional functionality for
interacting with HTTP messages. This functionality includes mapping parts of the HTTP request to method
parameters in addition to a comprehensive API for processing and controlling HTTP messages. Using that
API, you can now easily interact with headers in the requests and response messages, control status codes,
and more.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe how parameter binding works in ASP.NET Core Web API.


• Use the HttpRequest class to handle incoming requests.

• Use the HttpResponse class to control the response of an action.


• Throw exceptions to control HTTP errors.

Binding Parameters to Request Message


After locating the controller and action method,
there is still one last task that ASP.NET Core Web
API must handle, which is mapping data from the
HTTP request to method parameters. In ASP.NET
Core Web API, this process is known as
parameter-binding.
HTTP message data can be passed in the
following ways:

• The message-URI. In HTTP, the absolute path and query are used to pass simple values that help
identify the resource and influence the representation.

• The Entity-body. In some HTTP messages, the message body passes data.

Note: Headers are used to pass metadata and are not part of the business data. Header
data is not bound to method parameters by default and is accessed by using the
HttpRequest class described later in this lesson.

By default, ASP.NET Core Web API differentiates simple and complex types. Simple types are mapped
from the URI and complex types are mapped from the entity-body of the request. For parameter
bindings, simple types include all .NET primitive types (int, char, bool, and so on) with the addition of
DateTime, Decimal, TimeSpan, String, and Guid.
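
The following is a minimal sketch of this default behavior, assuming a hypothetical Reservation model
class. The id and includeDetails parameters are simple types and are therefore bound from the URI
(route and query string), while the Reservation parameter is a complex type and is bound from the
entity-body; the [FromBody] attribute is used here only to make that binding source explicit.

Binding simple types from the URI and complex types from the entity-body
public class ReservationsController : Controller
{
    // GET reservations/42?includeDetails=true
    // Both parameters are simple types, so they are bound from the URI.
    public IActionResult Get(int id, bool includeDetails)
    {
        // Place code here to retrieve the reservation.
        return Ok();
    }

    // POST reservations
    // Reservation is a complex type, so it is deserialized from the entity-body.
    [HttpPost]
    public IActionResult Post([FromBody] Reservation reservation)
    {
        // Place code here to store the reservation.
        return Ok(reservation);
    }
}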

Accessing Request Headers


Invoking a method is an important aspect of HTTP-based services, but HTTP provides vast
functionality that requires analyzing the request
message in its entirety. For example, request
headers can provide important information,
including the version of the entity passed, user
credentials, cookie data, and the requested
response format. ASP.NET Core Web API uses the
Microsoft.AspNetCore.Http.HttpRequest class
to represent incoming HTTP request messages.

The HttpRequest class can be accessed by most of the runtime components that compose the
request pipeline including message handlers, formatters, and filters. HttpRequest can also be accessed
inside the ASP.NET Core Web API controllers by using the Request property.
This code example uses the AcceptLanguage property of the RequestHeaders class (constructed over
Request.Headers) to retrieve the value of the Accept-Language header and return a localized greeting
message.

Retrieve the value of the Accept-Language header by using the Request property
public string Get(int id)
{
var lang = new RequestHeaders(Request.Headers).AcceptLanguage;
var bestLang = (from l in lang
orderby l.Quality descending
select l.Value.Value).FirstOrDefault();

switch (bestLang)
{
case "en":
return "Hello";
case "da":
return "Hej";
}

return string.Empty;
}

Creating Response Messages


Action methods can return both simple and
complex types that are serialized to a format
based on the Accept header. Although ASP.NET
Core Web API can handle the content negotiation
and serialization, it is sometimes required to
handle other aspects of the HTTP response
message (for example, returning a status code
other than 200 or adding headers).

The Microsoft.AspNetCore.Mvc.IActionResult interface enables programmers to define every aspect of
the HTTP response message the action returns. To control the HTTP response, you must create an action
with IActionResult as its return type. To create a result, use one of the methods declared in
Microsoft.AspNetCore.Mvc.ControllerBase. For example, BadRequest to return HTTP status code 400
or Content to return HTTP status 200 and string as a result.

This code example creates a new flight reservation and returns an HTTP message that has two important
characteristics: a 201 created status and a Location header with the URI of the newly created resource.

Using an IActionResult to control the HTTP response


[HttpPost]
public IActionResult Post(Reservation reservation)
{
Reservations.Add(reservation);
Reservations.Save();

return Created($"{Request.Path}/{reservation.ConfirmationNumber}", reservation);
}
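
Other helper methods on ControllerBase follow the same pattern. The following is a minimal sketch that
validates the incoming entity and returns a 400 (Bad Request) response when the input is invalid; the
Passengers property on the Reservation model is assumed only for illustration.

Returning a 400 (Bad Request) response for invalid input
[HttpPut]
public IActionResult Put(Reservation reservation)
{
    // Return a 400 (Bad Request) response with a short message in the entity-body.
    if (reservation == null || reservation.Passengers <= 0)
    {
        return BadRequest("A reservation must contain at least one passenger.");
    }

    // Place code here to update the reservation.

    // Return a 200 (OK) response with the reservation serialized into the entity-body.
    return Ok(reservation);
}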

For additional ways to return values from actions, refer to:


https://aka.ms/moc-20487D-m3-pg3

Throwing Exceptions with the HttpResponseException Class


In HTTP, errors are communicated by using two
mechanisms:

• HTTP status-codes. Provides a numeric representation of the result of the request to the server.
Status codes provide an application-readable representation of the result of a request.
• Entity-body. For most status codes, HTTP
accepts an entity body to provide clients with
details about the error that occurred.

Although both aspects of HTTP errors can be accessed by using the IActionResult interface, when you
deal with more complex scenarios, returning different results can create a complex code base. Modern
programming languages use exceptions to provide simple control flow when an error occurs. ASP.NET
Core Web API provides middleware to handle exceptions and return a proper HTTP response.

To handle exceptions, you must create a middleware and set the status code, headers, and content that
you want the response to have.

This code example shows how to create a middleware to handle the exception and return a 500 internal
server error response.

Handling exceptions by using middleware
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseExceptionHandler(
options =>
{
options.Run(
async context =>
{
context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
context.Response.ContentType = "text/html";
var ex = context.Features.Get<IExceptionHandlerFeature>();
if (ex != null)
{
var err = $"<h1>Error: {ex.Error.Message}</h1>{ex.Error.StackTrace
}";
await context.Response.WriteAsync(err).ConfigureAwait(false);
}
});
});
}

Demonstration: Throwing Exceptions


In this demonstration, you will learn how to handle exceptions in ASP.NET Core Web API. You will learn
how to use the HttpResponseMessage class to control the status-code of the HTTP response message,
and how to use the HttpResponseException class to provide better control flow if there is an error.

Demonstration Steps
You will find the steps in the “Throwing Exceptions“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.

Question: In which case should you use HttpResponseException?



Lesson 5
Automatically Generating HTTP Requests and Responses
In the modern age, very few applications are islands. A typical application interacts with tens or hundreds
of internal and external web services, a trend that has further increased with the advent of microservice
architectures. Mashing up undocumented APIs together, or having to understand each vendor’s
documentation in isolation, is an extremely daunting and complicated task. The OpenAPI specification
(formerly the Swagger specification) is a vendor-agnostic format for defining HTTP APIs, which can be
used for generating documentation, HTTP clients, HTTP servers, and mock service implementations.

In this lesson, we will explore the OpenAPI specification and how it can be used to design a modern HTTP
API in a language-independent manner. We will use the Swagger web-based tools for designing and
testing the API and will generate a C# client that can be used by our web services to interact with the API
we just designed, or with a third-party API that uses the OpenAPI specification.

Lesson Objectives
After completing this lesson, students will be able to:
• Design an HTTP API with the OpenAPI specification.

• Use Swagger Editor to construct and test HTTP requests.


• Generate C# clients for HTTP services by using AutoRest.

The OpenAPI Specification


Ten years ago, if you wanted to use a third-party
HTTP API, or even an HTTP API from another team
in your organization, you would probably have to
read their documentation to understand the
request and response format you could expect.
The documentation, in turn, would often be
incomplete, missing important details such as
content types or HTTP status codes, and lacking
important updates that were made to the service
implementation but not to the (now stale)
documentation. The result was fragile, unreliable
service integrations, which often broke at the first
change of implementation or platform to any of the dependent services.
The OpenAPI specification was created with the goal of standardizing how HTTP (REST) APIs are
described, and it is an open specification derived from the Swagger API specification by SmartBear Software.
Creating a standard for describing APIs that is language-agnostic, vendor-independent, and portable
across multiple platforms makes it possible for services to interoperate and connect without being aware
of each other’s implementation details. Furthermore, having a standard format for describing APIs means
you can automatically generate client code in any supported language for interacting with an HTTP API,
regardless of its implementation language or location -- similar to coding against a well-specified C#
interface. Likewise, you can use the standard format to generate skeleton server code in any supported
language and framework, and fill in only the implementation details -- similar to implementing a well-
specified C# interface.

For the full formal details of the OpenAPI specification, see the OpenAPI-Specification GitHub
repository
https://aka.ms/moc-20487D-m3-pg4

The OpenAPI specification provides a standard format for describing an HTTP API. A specification is a
JSON or YAML text file containing numerous sections for describing the API endpoints, parameters,
request bodies, response bodies, status codes, examples, and more. The following are the most common
components you will encounter in OpenAPI specifications:
• General information. Contains the OpenAPI version; the service name, description, and version; and
the base URLs for the service.

• Paths. Contain the API endpoints of your service, including relative URL parts, HTTP verbs, and
descriptions.

• Responses. Contain the possible HTTP status codes and response bodies returned by your service,
including their media type.
• Parameters. Contain the variable part of the accessed endpoint, and can be provided in the URL itself
(e.g. /flights/BY001), the query string (e.g. /flights/byId?id=BY001), or the request body.

• Reusable schemas. Contain descriptions of data models received or returned by the API, or
individual parameter, request or response descriptions.

Note: YAML (YAML Ain’t Markup Language) is a text serialization language similar to JSON,
which tries to do away with text elements that make the document harder for humans to parse
and understand. Compared to JSON, YAML is simpler to read because it uses indentation for
nesting, and a simplified format for nested objects, arrays, and strings.

This lesson uses the OpenAPI 3.0 specification, which has numerous useful features and
simplifications to the OpenAPI standard. To learn more about the new features and
differences between OpenAPI 2.0 and OpenAPI 3.0, see:
https://aka.ms/moc-20487D-m3-pg5

The following OpenAPI specification describes an API with a single path that you can access by making a
GET request, which returns a JSON document with a single string value:

OpenAPI specification example


openapi: 3.0.0
servers:
- url: 'https://example.com/hello-world'
info:
version: 1.0.0
title: Hello World
description: Hello World API
paths:
/hello:
get:
description: Returns a message
responses:
'200':
description: Successful response
content:
application/json:
schema:
type: object
properties:
message:
type: string
example:
message: Hello, World

The servers object in the preceding document is an array of URLs where the service is accessible. It is
followed by an info object that contains the service version, title, and description. Next is the paths
object, which contains a single endpoint: /hello, which expects a GET request with no parameters. The
only expected response is under the responses object, and it should have an HTTP status code 200 (OK),
have the application/json media type, and a simple schema with a single string property titled message.
There is also an example provided so that API users know what to expect (providing an example also
makes automatic server mocking possible).

If this service were running at the specified URL and conformed to the API above, we could make an HTTP
request to it by using PowerShell or cURL and receive a reply, as follows:

Calling a service by using cURL


$ curl -X GET "https://example.com/hello-world/hello" \
    -H "accept: application/json"
{
"message": "Hello, World"
}
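The same request can also be made from C# by using the HttpClient class. The following is a minimal
sketch (MediaTypeWithQualityHeaderValue comes from the System.Net.Http.Headers namespace, and the
URL is taken from the servers section of the specification):

Calling the service by using HttpClient (illustrative)

var client = new HttpClient();
client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/json"));

// Sends GET https://example.com/hello-world/hello with an accept: application/json header.
string json = await client.GetStringAsync("https://example.com/hello-world/hello");
// json now contains: {"message": "Hello, World"}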

The following example adds the /echo path, with query parameters and a more complex response schema,
to the OpenAPI specification discussed above:

OpenAPI specification example with query parameters


openapi: 3.0.0
servers:
  - url: 'https://example.com/hello-world'
info:
  version: 1.0.0-oas3
  title: Hello World
  description: Hello World API
paths:
  /hello:
    get:
      description: Returns a message
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object
                properties:
                  message:
                    type: string
              example:
                message: Hello, World
  /echo:
    get:
      description: Echoes back a message
      parameters:
        - in: query
          name: username
          schema:
            type: string
          description: The user's name
        - in: query
          name: count
          schema:
            type: integer
          description: The number of repetitions
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object
                properties:
                  message:
                    type: string
                  username:
                    type: string
                  count:
                    type: integer
              example:
                message: Hello, Dave!
                username: Dave
                count: 1

In the above API, the /echo endpoint expects two query string parameters titled username and count. The
response schema is slightly more complex and consists of an object with three properties.
If this service were running at the specified URL and conformed to the API above, we could make an HTTP
request to it by using PowerShell or cURL and receive a reply, as follows:

Calling a service by using cURL


$ curl -X GET \
"https://example.com/hello-world/echo?username=Dave&count=1" \
-H "accept: application/json"
{
"message": "Hello, Dave!",
"username": "Dave",
"count": 1
}

An important concern when building real-world OpenAPI specifications is the reuse of components. For
example, 404 (Not Found) responses will be quite similar in many cases, and specifying their details in
every path would be redundant. Likewise, HTTP request bodies and response bodies will often have
reusable, shared objects -- the data model objects for your service. To address the need for reusable
objects, the OpenAPI specification has a components section, which can contain reusable definitions for
parameters, responses, and object schemas.

The following OpenAPI document specifies a small part of the Blue Yonder Flight Reservations API, with
multiple operations and shared schemas:

OpenAPI specification example of the Blue Yonder Flight Reservations API


openapi: 3.0.0
info:
version: 1.0.0
title: blueyonder-flights
description: Blue Yonder Airlines flight reservations API
contact:
name: Blue Yonder Airlines
url: 'http://blueyonder.com'
email: contact@blueyonder.com
license:
name: MIT
url: 'http://opensource.org/licenses/MIT'
servers:
- url: https://blueyonder.com/flights-api
paths:
/flights:
get:
description: Returns a list of all flights
responses:
'200':
description: Successfully returned flights
content:
application/json:
schema:
type: array
items:
type: object
properties:
airline:
type: string
flightnum:
type: integer
example:
- airline: Blue Yonder
flightnum: 97
- airline: Blue Yonder
flightnum: 103
/flights/{flightId}:
get:
description: Returns flight information for a flight
parameters:
- name: flightId
in: path
required: true
schema:
type: string
example: BY97
responses:
'200':
description: Successfully returned flight
content:
application/json:
schema:
$ref: '#/components/schemas/Flight'
example:
airline: Blue Yonder
source: Paris
destination: London
departureTime: '21 Mar 2018 08:30:00'
number: 97
'404':
description: No such flight found
/reservations:
post:
description: Creates a new flight reservation
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Reservation'
example:
airline: Blue Yonder
flightNumber: 97
departureTime: '21 Mar 2018 08:30:00'
passengerName: David Smith
responses:
'201':
description: Successfully booked a flight


content:
application/json:
schema:
type: object
required:
- confirmation
properties:
confirmation:
type: string
example:
confirmation: AB78FG
'400':
description: Error booking the flight
content:
application/json:
schema:
type: object
properties:
reason:
type: string
example:
reason: No seats available on this flight
'404':
description: No such flight found

components:
schemas:
Flight:
type: object
required:
- airline
- source
- destination
- departureTime
- number
properties:
airline:
type: string
source:
type: string
destination:
type: string
departureTime:
type: string
format: datetime
number:
type: integer
Reservation:
type: object
required:
- airline
- flightNumber
- departureTime
- passengerName
properties:
airline:
type: string
flightNumber:
type: integer
departureTime:
type: string
format: datetime
passengerName:
type: string

In the preceding example, pay attention to the components section, where we define the schemas for
two object types -- Flight and Reservation. In the paths section, we include references to these schemas
by using the special $ref keyword. Also, note the use of the required keyword to specify which object
properties are required (for both request parameters and response content).

To learn more about the OpenAPI specification and how to construct specifications for more
complex services, refer to the tutorial at:
https://aka.ms/moc-20487D-m3-pg6

Constructing HTTP Requests with Swagger


Swagger Hub (by SmartBear Software) is an online
platform for authoring OpenAPI specifications,
testing them by making HTTP requests to mock
services, generating client and server code from
the specifications, and even producing automatic
documentation from the specification sections
and examples. Swagger also makes available a set
of standalone open source tools that you can use
to edit OpenAPI specifications (Swagger Editor),
generate code (Swagger Codegen), and explore
automatic documentation (Swagger UI).

The following screenshot shows the OpenAPI authoring experience in the stand-alone Swagger Editor,
deployed into a Docker container:

The following screenshot shows the expanded /flights/{flightId} API, which includes the expected
parameter type and the possible responses:

After you define your API on the left, you can use the UI on the right to test it, right from the editor. If the
API requires parameters or a request body, you can enter them as well. Finally, the editor makes an HTTP
request on your behalf and displays the results immediately.
The following screenshot shows the test UI, where you can specify the flight ID and then execute the
request:

The following screenshot shows the request that was run and the response that was returned:

In the preceding screenshot, the request is sent to the automatic mock server provided by Swagger Hub
(https://virtserver.swaggerhub.com/…). The mock server makes it very easy to test your API definition
before you have any server implementation available for testing.

To learn more about the Swagger Hub automatic mock server integration, see:
https://aka.ms/moc-20487D-m3-pg7

Demonstration: Testing HTTP requests with Swagger


In this demonstration, you will learn how to install the Swashbuckle NuGet package and review and test
the API by using Swagger.
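
As background for the demonstration, enabling Swagger in an ASP.NET Core project typically involves
registering the Swagger generator and the Swagger UI in the Startup class. The following is a minimal
sketch, assuming the Swashbuckle.AspNetCore package; the exact types (for example, Info versus
OpenApiInfo) and overloads vary between Swashbuckle versions, and the steps in the demo may differ:

Enabling Swagger with Swashbuckle (illustrative)

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // Register the Swagger generator and define a single OpenAPI document.
    services.AddSwaggerGen(options =>
    {
        options.SwaggerDoc("v1", new Info { Title = "Blue Yonder API", Version = "v1" });
    });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Serve the generated OpenAPI document and the interactive Swagger UI.
    app.UseSwagger();
    app.UseSwaggerUI(options =>
    {
        options.SwaggerEndpoint("/swagger/v1/swagger.json", "Blue Yonder API v1");
    });
    app.UseMvc();
}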

Demonstration Steps
You will find the steps in the “Testing HTTP requests with Swagger“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.

Generating C# HTTP Clients by Using AutoRest


Our earlier discussion of OpenAPI and Swagger
has been completely language-agnostic. You can
use our OpenAPI specification to generate client
and server code in a variety of languages, by using
the Swagger Codegen tool or other available
open source alternatives. AutoRest is an open
source code generation tool from the Azure
organization, which can generate client libraries
from OpenAPI documents. It is a Node.js
application, which means you can use it on a
variety of operating systems, including Windows,
Linux, and macOS. The supported output
languages include C#, Node.js, Python, Java, Ruby, and Go.

You can find the AutoRest open source project and its documentation on GitHub:
https://aka.ms/moc-20487D-m3-pg8
Note: At the time of writing, AutoRest has full support for OpenAPI 2.0, but it doesn’t have
support for OpenAPI 3.0. If you plan to use AutoRest, make sure to provide an OpenAPI 2.0
document to the tool.

The following command installs AutoRest on your system, provided you have a working Node.js
installation (v7.10.0 or later is required at the time of writing):

Installing AutoRest by using npm


npm install -g autorest

To generate a C# client by using AutoRest, you provide it with a configuration file in Markdown format. The
configuration file contains the location of your OpenAPI document (JSON or YAML) and any additional
documentation, such that the configuration file can be used as a standalone entry point to building your
service API. AutoRest then generates the client code required to access the service. Note that AutoRest
automatically pulls the code generators you specify (for example, the C# code generator when specifying
--csharp), so you don't need to install all of them ahead of time.

The following is a minimal AutoRest configuration file, which specifies the location of an OpenAPI
document and an output directory for the generated client:

An AutoRest configuration file


# BlueYonder Hotels API

> see https://aka.ms/autorest

```yaml
input-file: hotels_1.0.0_swagger.yaml

csharp:
namespace: BlueYonder.Hotels
output-folder: blueyonder-hotels
```

Note: The > see comment in the Markdown configuration file is required by the AutoRest
tool. It will throw an exception if it is not present.

The following is the OpenAPI definition in YAML format provided to the AutoRest tool:

OpenAPI definition in YAML format


swagger: '2.0'
info:
version: '1.0.0'
title: 'Blue Yonder Hotel Reservations'
description: 'Blue Yonder hotel reservations service'
paths:
/hotels:
get:
description: Returns a list of hotels
operationId: getHotels
responses:
200:
description: Successfully returned a list of hotels
schema:
type: array
items:
type: object
properties:
id:
type: string
name:
type: string
address:
type: string
starting_price:
type: number
/hotels/{hotel}:
get:
description: Returns a specific hotel's details
operationId: getHotelById
parameters:
- name: hotel
in: path
type: string
required: true
description: The hotel ID
responses:
404:
description: No such hotel found
200:
description: Successfully returned the hotel details
schema:
$ref: '#/definitions/Hotel'
definitions:
Hotel:
type: object
properties:
id:
type: string
name:
type: string
address:
type: string
starting_price:
type: number
maximum_price:
type: number
available_rooms:
type: integer

host: virtserver.swaggerhub.com
basePath: /xoreax/hotels/1.0.0
schemes:
- https

Note: The operationId attribute attached to each operation is required by the AutoRest
tool and is used for the method names in the generated code. If it is not present, the tool will
throw an exception.

The output directory contains an interface for the service and a service proxy that implements the
interface. Additionally, each request, response, and schema object that is non-trivial gets its own class. For
the above example, the following files were generated:

• Models/Hotel.cs. The Hotel class.

• Models/GetHotelsOKResponseItem.cs. An auto-generated class for the individual items in the


/hotels operation’s response.

• IBlueYonderHotelReservations.cs. A C# interface with the service methods.

• BlueYonderHotelReservations.cs. A proxy class implementing the interface.

• BlueYonderHotelReservationsExtensions.cs. Helper extension methods that wrap the interface methods
with simpler overloads (for example, GetHotelsAsync).

The following code is the IBlueYonderHotelReservations interface generated by AutoRest:

The generated interface by AutoRest


// <auto-generated>
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is
// regenerated.
// </auto-generated>

namespace BlueYonder.Hotels
{
using Microsoft.Rest;
using Models;
using Newtonsoft.Json;
using System.Collections;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

/// <summary>
/// Blue Yonder hotel reservations service
/// </summary>
public partial interface IBlueYonderHotelReservations : System.IDisposable
{
/// <summary>
/// The base URI of the service.
/// </summary>
System.Uri BaseUri { get; set; }

/// <summary>
/// Gets or sets json serialization settings.
/// </summary>
JsonSerializerSettings SerializationSettings { get; }

/// <summary>
/// Gets or sets json deserialization settings.
/// </summary>
JsonSerializerSettings DeserializationSettings { get; }

/// <summary>
/// Returns a list of hotels
/// </summary>
/// <param name='customHeaders'>


/// The headers that will be added to request.
/// </param>
/// <param name='cancellationToken'>
/// The cancellation token.
/// </param>
Task<HttpOperationResponse<IList<GetHotelsOKResponseItem>>>
GetHotelsWithHttpMessagesAsync(Dictionary<string, List<string>> customHeaders = null,
CancellationToken cancellationToken = default(CancellationToken));

/// <summary>
/// Returns a specific hotel's details
/// </summary>
/// <param name='hotel'>
/// The hotel ID
/// </param>
/// <param name='customHeaders'>
/// The headers that will be added to request.
/// </param>
/// <param name='cancellationToken'>
/// The cancellation token.
/// </param>
Task<HttpOperationResponse<Hotel>> GetHotelByIdWithHttpMessagesAsync(string
hotel, Dictionary<string, List<string>> customHeaders = null, CancellationToken
cancellationToken = default(CancellationToken));

}
}

To use the generated client in your application, add the files to your project, and then add the
Microsoft.Rest.ClientRuntime NuGet package. Then, create an instance of the service proxy class and
use it directly.
The following example shows how to use the generated client in your C# application:

Using the generated client


var hotelsClient = new BlueYonderHotelReservations();
IList<GetHotelsOKResponseItem> hotels = await hotelsClient.GetHotelsAsync();
Hotel hotel = await hotelsClient.GetHotelByIdAsync(hotels[0].Id);

For more information about AutoRest configuration files, refer to:

https://aka.ms/moc-20487D-m3-pg9

Demonstration: Generating C# HTTP Clients by Using AutoRest


In this demonstration, you will learn how to use AutoRest to create a C# client for your service, integrate
the generated code into a C# client application, and then use it to test the service.

Demonstration Steps
You will find the steps in the “Generating C# HTTP Clients by Using AutoRest“ section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD03_DEMO.md.

Module Review and Takeaways


In this module, you learned how HTTP can be used for creating services and how to use ASP.NET Core
Web API to create HTTP-based services. You have also learned how to consume ASP.NET Core Web API
services from the client by using the HttpClient class. You also learned how to apply best practices when
you develop HTTP services by using ASP.NET Core Web API.

Best Practices
• Model your services to describe resources and not functions.

• Use IActionResult to return a valid HTTP response message.


• When handling errors, use middleware to avoid complex code.

• Use OpenAPI/Swagger to automatically generate HTTP requests and responses.

Review Question
Question: What are ASP.NET Core Web API controllers used for?

Module 4
Extending ASP.NET Core HTTP Services
Contents:
Module Overview 4-1

Lesson 1: The ASP.NET Core Request Pipeline 4-2

Lesson 2: Customizing Controllers and Actions 4-7


Lab: Customizing the ASP.NET Core Pipeline 4-13

Lesson 3: Injecting Dependencies into Controllers 4-14


Module Review and Takeaways 4-17

Module Overview
ASP.NET Core Web API provides a complete solution for building HTTP services, but services often have
various needs and dependencies. In many cases, you will need to extend or customize the way ASP.NET
Core Web API executes your service. You might need to extend or customize ASP.NET Core Web API to
handle needs such as error handling and logging, integration with other components of your application,
and support for other standards that are available in the HTTP world.

Understanding the way ASP.NET Core Web API works is important when you extend ASP.NET Core Web
API. The division of responsibilities between components and the order of execution are important when
intervening with the way ASP.NET Core Web API executes.

Finally, with ASP.NET Core Web API, you can also extend the way you interact with other parts of your
system. With the dependency resolver mechanism, you can control how instances of your service are
created, giving you complete control on managing dependencies of the services.

Objectives
After completing this module, students will be able to:
• Extend the ASP.NET Core Web API request and response pipeline.

• Customize controllers and actions.

• Inject dependencies into ASP.NET Core Web API controllers.



Lesson 1
The ASP.NET Core Request Pipeline
In this lesson we will learn about the Web API processing architecture and the flow of requests and
responses in it. We will focus on the role of middleware in the pipeline and learn the benefits and the
ways to customize middleware.

Lesson Objectives
After completing this lesson, students will be able to:

• Describe the ASP.NET Core Web API processing architecture.

• Describe the ASP.NET Core Web API middleware concept.

• Describe custom middleware.

ASP.NET processing architecture


To build HTTP-based services, you need to handle
two main workflows:

• Receiving HTTP request messages from clients


and creating method invocations based on
those messages.

• Returning HTTP response messages to clients


based on the result of the methods invoked.
To handle these two tasks, ASP.NET Core Web API
uses a processing architecture that spans from the
underlying communication infrastructure to the
action method, handling every aspect of both
HTTP messages and method invocation. Understanding this architecture can help you in extending
ASP.NET Core Web API and developing better services.

Architecture Overview
The ASP.NET Core Web API processing architecture is made up of three layers:

• Hosting
• Middleware

• Controllers

Hosting
The hosting layer is in charge of interacting with the underlying communication infrastructure, creating an
HttpRequest object from the request, and sending the object down through the middleware pipeline. The
hosting layer is also in charge of converting HttpResponse objects received from the middleware to HTTP
messages sent through the underlying communication infrastructure.

ASP.NET Core Web API has three implementations for the hosting layer:

• Kestrel. Default hosting is implemented in the Microsoft.AspNetCore.Server.Kestrel package.


Kestrel is a cross-platform web server that can be used by itself or with a reverse proxy server like IIS
on Windows or Nginx on Linux.

• ASP.NET Core Module. This module works with Kestrel and is a native IIS module on Windows.
• HTTP.sys. This web server for ASP.NET Core is only for Windows. It has some features that are missing
in the Kestrel web server, such as Windows authentication and port sharing.

For more information about hosting, go to the following URL.


https://aka.ms/moc-20487D-m4-pg1

Middleware
Middleware are methods that are chained to each other to form a pipeline. Every middleware receives an
HttpContext object and performs some processing on the message before passing it to the next
middleware in the pipeline. This allows ASP.NET Core Web API to separate the concerns for different
processing that must be applied to every message and provides an extensibility point for developers.
Middleware are covered later in this lesson.

Controllers
The final layer in ASP.NET Core Web API is executed by the controllers themselves. When the
OnActionExecutionAsync method of a controller is called, it starts a process that should result in the
execution of an action method processing the request and returning a response. The process is made out
of the following steps:
• Action Selection. The first step for executing an action method is identifying which action should be
executed. Action selection is covered in Module 3, “Creating and Consuming ASP.NET Core Web
APIs”, Lesson 2, “Creating an ASP.NET Core Web API” in Course 20487.

• Creating the Filters Pipeline. Each action can have a set of components called filters associated with it.
Similar to middleware, filters also provide a way to create a pipeline of processing units, but only
for an action and not for the entire host. ASP.NET Core Web API has five types of filters, executed in
the following order:

o Authorization filters

o Resource filters

o Action filters (are covered later in this lesson)


o Exception filters

o Result filters
The filters pipeline also contains two other components:

o ModelBinders. The ModelBinders class performs the process of parameter binding and is
executed after the resource filters. Parameter binding is covered in Module 3, “Creating an
ASP.NET Core Web APIs”, Lesson 2, “Creating an ASP.NET Core Web API” in Course 20487.
o ControllerActionInvoker. The Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker
class is in charge of invoking the action method and converts the result to ResponseMessage (if
needed).

For more information about filters, go to the following URL.


https://aka.ms/moc-20487D-m4-pg2

ASP.NET middleware
A pipeline of message processing components is a
common pattern in many frameworks that deal
with messages. ASP.NET modules, Connect
middleware (in Node.js) and many other
frameworks all provide components that receive a
request, return a response, and provide
extensibility to a message processing pipeline.

ASP.NET Web API middleware are methods that


are chained to each other to form a pipeline.
Every middleware receives an HttpContext object
and performs some processing on the message
before passing it to the next middleware in the
pipeline.
There are three different ways to create a middleware:

• Use. You allow middleware to run code before and after the next middleware, and even to short-circuit
the pipeline and not run the next middleware.
• Run. You create a terminal middleware that runs at the end of the pipeline.

• Map. You branch the pipeline so that different routes are handled by different pipelines (see the Run
and Map sketch after the Use example below).


The following code shows a simple middleware created by calling the Use extension method on the
IApplicationBuilder interface and providing a simple lambda function.

A simple middleware Implementation


public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.Use(async (context, next) =>
    {
        // Before action executed.
        await next.Invoke();
        // After action executed.
    });
    …
}
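
The Run and Map styles are not shown in the example above. The following is a minimal sketch of how
they might be combined; the path and the response messages are illustrative:

Run and Map middleware (illustrative)

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Map branches the pipeline: requests whose path starts with /status
    // are handled by the branch pipeline defined here.
    app.Map("/status", branch =>
    {
        branch.Run(async context =>
        {
            await context.Response.WriteAsync("Service is up");
        });
    });

    // Run registers a terminal middleware at the end of the main pipeline.
    app.Run(async context =>
    {
        await context.Response.WriteAsync("Hello from the default pipeline");
    });
}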

For more information about middleware, go to the following URL.


https://aka.ms/moc-20487D-m4-pg3

Creating a custom middleware


Although you can create simple middleware as a lambda function, more complex middleware can be
created as a class, which can also use dependency injection (explained in Lesson 3, "Injecting Dependencies
into Controllers"). A class also makes it simpler to reuse the same middleware in different applications or
routes.

Create custom middleware by defining a constructor that accepts a RequestDelegate parameter and an
InvokeAsync method that accepts an HttpContext parameter.

Creating custom middleware


public class CustomMiddleware
{
    private readonly RequestDelegate next;

    public CustomMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // before next middleware
        await next(context);
        // after next middleware
    }
}

The standard way to expose middleware is with an extension method on the IApplicationBuilder interface.

Create an extension method to expose middleware.

Exposing middleware with extension method


public static class CustomMiddlewareExtensions
{
    public static IApplicationBuilder UseCustomMiddleware(
        this IApplicationBuilder builder)
    {
        return builder.UseMiddleware<CustomMiddleware>();
    }
}

Use the extension method in the Configure method just like simple middleware.

Using the CustomMiddleware


public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseCustomMiddleware();
}

Demonstration: Creating a Middleware for Custom Error Handling


In this demonstration, you will see how to create a new ASP.NET Core middleware to handle exceptions.
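
A minimal sketch of such a middleware is shown below; the class name, status code, and response body
are illustrative, and the implementation built in the demo may differ:

An exception-handling middleware (illustrative)

public class ErrorHandlingMiddleware
{
    private readonly RequestDelegate next;

    public ErrorHandlingMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            // Let the rest of the pipeline run.
            await next(context);
        }
        catch (Exception ex)
        {
            // Translate unhandled exceptions into a 500 response.
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            await context.Response.WriteAsync($"An error occurred: {ex.Message}");
        }
    }
}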

Demonstration Steps
You will find the steps in the “Demonstration: Creating a Middleware for Custom Error Handling“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD04_DEMO.md.

Lesson 2
Customizing Controllers and Actions
In this lesson, we will learn about asynchronous actions and how they affect the overall performance of
code execution. We will see examples of filters, which provide a mechanism for extending the pipeline for
specific actions or controllers, similar to middleware. We will learn how to validate incoming data by using
ASP.NET Core model validators, and finally, we will learn how to negotiate various media types, such as
XML, JSON, and binary, by using ASP.NET Core media type formatters.

Lesson Objectives
After completing this lesson, students will be able to:

• Describe asynchronous actions.


• Explain how filters work.

• Describe model validators.


• Describe media type formatters.

Asynchronous actions
One of the most powerful capabilities of ASP.NET
Core Web API is the support for building
asynchronous actions. Asynchronous actions
provide a simple-to-use mechanism that you can
use to improve the scalability of services when
performing I/O bound operations.

I/O bound operations


I/O bound operations are common in services.
These include operations such as database access,
file access, and remote service calls. Most I/O
bound APIs, from the low-level
System.IO.Stream to more high-level APIs, such
as ADO.NET and HttpClient, provide both synchronous and asynchronous operations.

Synchronous I/O bound operations


Synchronous operations provide a simple model to access I/O devices, for example, when accessing the
network by using the WebRequest API.
The following code shows a synchronous call using the WebRequest API.

Synchronous call using WebRequest API


var client = WebRequest.Create("http://server-2/");
var response = client.GetResponse();
var stream = response.GetResponseStream();
var reader = new StreamReader(stream);
var result = reader.ReadToEnd();

The preceding code is relatively easy to follow. However, there is one line to which you should pay close
attention. When calling the client.GetResponse method, the executing thread is blocked while waiting
for the response. This blocking behavior is unnecessary, considering that most of the GetResponse
method's execution is carried out by the network card and the remote server.

Asynchronous I/O Bound Operations


Asynchronous I/O bound APIs are designed to avoid the redundant behavior of their synchronous
equivalents. For example, the HttpClient class provides an asynchronous API for calling HTTP-based
services.

The following code shows an asynchronous service call by using the HttpClient API.

Asynchronous call by using the HttpClient API


var client = new HttpClient();
var response = await client.GetAsync("http://server-2/");
var result = await response.Content.ReadAsStringAsync();

The preceding code uses the await keyword to simplify the call to the asynchronous
HttpClient.GetAsync method. While this code seems sequential during the execution, it is actually
divided into the following steps:
• All the code up to the await keyword is being executed sequentially.

• When calling the HttpClient.GetAsync method, the method immediately returns a task representing
its asynchronous execution and the current thread returns.

• The HttpClient.GetAsync will execute asynchronously.

• When using the await keyword, the C# compiler generates a continuation method that includes all
the code following the await statement. This code will be used as the continuation of the task
returned by the HttpClient.GetAsync method, which is invoked by the Input/Output Completion
Port (IOCP).

Creating an asynchronous action


The await keyword must be used in an asynchronous method. Asynchronous methods are methods that
are declared using the async keyword and return one of the following types: Task, Task<T> or void.
When an ASP.NET Core Web API action is created as an asynchronous method, ASP.NET Core Web API runs
this method asynchronously. This means that when the executing thread encounters the first await
statement, the thread is released, and ASP.NET Core Web API can use it for other requests. After the
asynchronous operation completes, the rest of the asynchronous method runs on a thread-pool thread.

The following code sample shows an asynchronous service call run from inside an asynchronous action.

An asynchronous action method


public async Task<string> Get()
{
var client = new HttpClient();
var response = await client.GetAsync("http://server-2/");
return await response.Content.ReadAsStringAsync();
}

Demonstration: Creating Asynchronous Actions


In this demonstration, you will convert an existing method that uses synchronous I/O calls to an
asynchronous method that uses asynchronous I/O calls. As part of the conversion, you will use the async
and await keywords.

Demonstration Steps
You will find the steps in the “Demonstration: Creating Asynchronous Actions” section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD04_DEMO.md.

Filters
Middleware are applied early on in the ASP.NET
Core Web API pipeline. This is done before
reaching the controller. This means that any
middleware that is configured will be executed for
every request and response.

Sometimes, a more selective approach is needed.


Filters provide a mechanism to extend the
pipeline for specific actions or controllers.
ASP.NET Core Web API has five different types of
filters, each designed for a different purpose and
executed in a different stage:
• Action filters. These are classes that
derive from the ActionFilterAttribute class. Action filters are executed later in the filter pipeline.
This is done after the authorization filters are executed and after parameter binding takes place. You
can use action filters to extend the ASP.NET Core Web API pipeline in a way similar to middleware.
There are two main differences between action filters and middleware. The first is that action filters
can be applied to specific actions or controllers. The second difference from middleware is the fact
that action filters do not receive an HttpRequest as a parameter. Instead, action filters receive a
parameter of type ActionContext. The ActionContext provides a more complete object model,
which includes access to APIs such as actions arguments, model state, the request, and response and
more.

The following code sample shows an action filter that uses the System.Diagnostics.Trace class to emit
traces.

A simple action filter


public class TraceFilterAttribute : ActionFilterAttribute
{
    public override async Task OnActionExecutionAsync(
        ActionExecutingContext context,
        ActionExecutionDelegate next)
    {
        Trace.WriteLine("Trace filter start");

        foreach (var item in context.ActionArguments.Keys)
            Trace.WriteLine(string.Format("{0}: {1}", item,
                context.ActionArguments[item]));

        await base.OnActionExecutionAsync(context, next);

        Trace.WriteLine(string.Format("Trace filter result: {0}", context.Result));
    }
}
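
To apply the filter, decorate a specific controller or action with the attribute, or register it globally. The
following is a minimal sketch; the controller name and returned values are illustrative:

Applying an action filter (illustrative)

[TraceFilter]
public class FlightsController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new[] { "BY001", "BY002" };
    }
}

// Alternatively, register the filter globally for all controllers and actions:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options => options.Filters.Add(new TraceFilterAttribute()));
}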

• Exception filters. These are classes that derive from the ExceptionFilterAttribute class and are used to
handle exceptions. Exception filters are executed after the completion of other filters and only if the
Task returned by the filters pipeline is in a faulted state.

The following code sample demonstrates how to create an exception filter.

Simple tracing exceptions filter


public class TraceExceptionFilter : ExceptionFilterAttribute
{
    public override Task OnExceptionAsync(ExceptionContext context)
    {
        return Task.Run(() => Trace.WriteLine(context.Exception));
    }
}

• Result filters. These are classes that derive from the ResultFilterAttribute class and are similar to action
filters, but if the action is faulted, the filter will not be executed.
The following code sample demonstrates how to create a result filter.

Simple tracing result filter


public class TraceResultFilter : ResultFilterAttribute
{
    public override Task OnResultExecutionAsync(ResultExecutingContext context,
        ResultExecutionDelegate next)
    {
        Trace.WriteLine(context.Result);

        return base.OnResultExecutionAsync(context, next);
    }
}

Model validators
Usually, ASP.NET Core applications store data in
the database. Therefore, you need to validate the
data that comes from the users before doing any
operations. ASP.NET Core has abstractions called
model validators for this purpose. These
abstractions are implemented with attributes that
are derived from ValidationAttribute. There are
built-in attributes for common cases such as
Required, StringLength, and Range.

The following example declares the Person class with model validations specifying that the name is
required and can have a maximum of 100 characters, and that the birthdate contains a date without a
time component.

Using built-in validation attributes


public class Person
{
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    [DataType(DataType.Date)]
    public DateTime Birthdate { get; set; }
}

For custom validation, you can create a custom attribute and use it in the model.

The following example creates a custom model validator that checks whether an integer is between two numbers.

Create model validator attribute


public class BetweenAttribute : ValidationAttribute
{
    private int min;
    private int max;

    public BetweenAttribute(int min, int max)
    {
        this.min = min;
        this.max = max;
    }

    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        int num = (int)value;
        if (num >= min && num <= max)
        {
            return ValidationResult.Success;
        }
        return new ValidationResult($"{num} is not between {min} and {max}");
    }
}
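
Validation attributes are evaluated during model binding, and the controller can inspect ModelState
before acting on the data. The following is a minimal sketch; the model and controller names are
illustrative:

Checking validation results in a controller (illustrative)

public class SeatRequest
{
    [Between(1, 10)]
    public int SeatCount { get; set; }
}

public class ReservationsController : Controller
{
    [HttpPost]
    public IActionResult Post([FromBody] SeatRequest request)
    {
        // Return 400 (Bad Request) with the validation errors if any attribute failed.
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }
        return Ok();
    }
}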

Media type formatters


Serialization and deserialization are common tasks
when creating and consuming services. .NET Core
offers a variety of serialization mechanisms that
support different formats such as: XML, Binary,
and JSON. However, when creating HTTP-based
services, serialization must be aware of HTTP’s
content negotiation. Content negotiation is
discussed in-depth in Module 3, “Creating and
Consuming ASP.NET Core Web APIs”, Lesson 1,
“HTTP Services” in Course 20487.

ASP.NET Core Web API has built-in support for


content negotiation using media type formatters.
Media type formatters are classes derived from the InputFormatter or OutputFormatter base classes.
Each media type formatter has a property called SupportedMediaTypes that contain all the media types
it supports. When you implement a new media type formatter, you first need to populate this property.

The following code demonstrates populating the SupportedMediaTypes property inside a media type
formatter’s constructor.

Adding support for media types in a media type formatter


public class CsvFormatter : InputFormatter
{
    public CsvFormatter()
    {
        this.SupportedMediaTypes.Add("text/csv");
    }
}

Sometimes, the same media type can be supported only by specific types. For example, images might be
a valid media type when requesting a resource for an employee in a company, but not for a department.
The InputFormatter class has the CanReadType virtual method, and the OutputFormatter class has the
CanWriteType virtual method; these can be overridden to define which types can be read or written by the
specific media type formatter.
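
For example, a CSV output formatter might declare that it can write only Employee objects and collections
of Employee objects. The following is a minimal sketch of such an override, assuming it is a member of a
formatter class named EmployeeCsvOutputFormatter (an illustrative name) that derives from
OutputFormatter:

Restricting a media type formatter to specific types (illustrative)

protected override bool CanWriteType(Type type)
{
    // Handle only Employee objects and collections of Employee objects.
    return typeof(Employee).IsAssignableFrom(type) ||
           typeof(IEnumerable<Employee>).IsAssignableFrom(type);
}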

Finally, you can implement the actual process of reading or writing the data using the
ReadRequestBodyAsync and WriteResponseBodyAsync methods.
The following code demonstrates the use of the WriteResponseBodyAsync method to provide a list of
employees using the CSV file format.

Implementing the WriteResponseBodyAsync method


public override async Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
{
    var employees = context.Object as IEnumerable<Employee>;
    var response = context.HttpContext.Response;

    // Write CSV header
    await response.WriteAsync("Full Name, Employee ID\r\n");
    if (employees != null)
    {
        foreach (var employee in employees)
        {
            await response.WriteAsync($"{employee.FullName},{employee.ID}\r\n");
        }
    }
}
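
A custom formatter participates in content negotiation only after it is registered with MVC. The following
is a minimal sketch of registering the formatters from this lesson in ConfigureServices, assuming the
CsvFormatter shown earlier and the illustrative EmployeeCsvOutputFormatter mentioned above:

Registering media type formatters (illustrative)

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        // Make the custom formatters available for content negotiation.
        options.InputFormatters.Add(new CsvFormatter());
        options.OutputFormatters.Add(new EmployeeCsvOutputFormatter());
    });
}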

Lab: Customizing the ASP.NET Core Pipeline


Scenario
In this lab, you will customize the ASP.NET Core Pipeline.

Objectives
After you complete this lab, you will be able to:

• Add inversion of control by using Dependency Injection to the project.

• Create a cache mechanism and action filters.

• Add middleware to inform the client through a response header.

Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD04_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD04_LAK.md.

Exercise 1: Use Dependency Injection to Get a Repository Object


Scenario
Implement the inversion of control design pattern and use dependency injection.

Exercise 2: Create a Cache Filter


Scenario
Implement an action filter that caches the result of the service.

Exercise 3: Create a Debugging Middleware


Scenario
Implement ASP.NET Core middleware that returns the execution time of the service.

Lesson 3
Injecting Dependencies into Controllers
Most applications consist of several components that depend on each other. It is important to be able to
replace the implementation of a dependency without having to change the code that uses it. To do this,
you first need to decouple software components from the other components they depend on. This lesson
describes how to decouple dependent components from their dependencies. The lesson also explains how
you can use the IServiceCollection interface in ASP.NET Core Web API to implement dependency
injection.

Lesson Objectives
After completing this lesson, students will be able to:

• Describe how dependency injection works.

• Describe how to use the ASP.NET Core Web API dependency injection.

• Use a dependency injection.

Dependency injection
Modern software systems are built out of different
software components. For example, many
distributed applications use a layered architecture
that separates different responsibilities to different
components (Logical Layers of Distributed
Applications are discussed in Module 1, “Overview
of Service and Cloud Technologies,” Lesson 1,
“Key Components of Distributed Applications”).
Dependency injection is a common software
design pattern that is used to decouple software
components from other components they are
dependent on. This is done so that dependencies
could be easily replaced if needed. For example, it is common to replace the dependencies during tests
with a mock object in order to control the result they return.

At the core of the dependency injection design pattern, there are three types of components:

• The dependent component. A software component that is dependent on other components to


execute.

• Dependencies. These are software components that the dependent component depends upon.

• Injector. A component that obtains or creates instances of the dependencies and passes them to the
dependent component.

In order for the dependent component to be decoupled from its dependencies, it should only define
them as interfaces. The dependencies should be passed into the dependent component as method or
constructor parameters by the injector, allowing the injector to replace the concrete implementation of
the dependency at runtime.
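
The following is a minimal sketch of the pattern; the names are illustrative. The dependent component
depends only on an interface, so the injector (or a unit test) can supply any implementation:

Decoupling a dependent component from its dependency (illustrative)

public interface IFlightRepository
{
    IEnumerable<Flight> GetFlights();
}

public class FlightsService
{
    private readonly IFlightRepository repository;

    // The injector passes in the concrete implementation (or a mock in tests).
    public FlightsService(IFlightRepository repository)
    {
        this.repository = repository;
    }

    public int CountFlights()
    {
        return repository.GetFlights().Count();
    }
}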

Declaring injection rules in the ASP.NET Core startup component


ASP.NET Core Web API supports dependency injection with the IServiceCollection interface. You can
register the dependencies in the ConfigureServices method of the Startup class by using the Add method.
The Add method has a parameter of type ServiceDescriptor with the following properties:

• Service type. This is the type of the interface that represents the service.

• Service implementation type. This is the type that implements the service type interface.

• Lifetime. This is the lifetime of the service that will be created. There are three kinds of lifetime:

o Singleton. The service will have one instance while the application is running.
o Scoped. The service will have one instance for each request.

o Transient. The service will be created each time it is requested.

Registering services in the ConfigureServices method in the Startup class using the Add method and the
AddXXX extension methods.

Registering dependencies
public void ConfigureServices(IServiceCollection services)
{
    services.Add(new ServiceDescriptor(typeof(IService), typeof(ServiceImpl),
        ServiceLifetime.Singleton));
    services.AddSingleton<IService1, ServiceImpl1>();
    services.AddScoped<IService2, ServiceImpl2>();
    services.AddTransient<IService3, ServiceImpl3>();
}

Using dependency injection in controllers


After registering dependencies, ASP.NET Core
provides two ways to resolve dependencies in the
controller:

• Constructor injection. When the same


dependency is used in most actions in the
controller, the dependency can be resolved
once in the constructor by adding a
parameter.
Resolve a dependency in the constructor by
adding a parameter.

Constructor injection
public class ValuesController : Controller
{
    private IService service;

    public ValuesController(IService service)
    {
        this.service = service;
    }
}

• Action injection. When a specific action needs a dependency, it can be resolved by adding a
parameter to the action and decorating it with the FromServices attribute.

Resolve a dependency in the action method by adding a parameter decorated with FromServices
attribute.

Action injection
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get([FromServices]IService service, string name)
    {
        return new string[] { "value1", "value2" };
    }
}

Demonstration: Using Dependency Injection with Controllers


Demonstration Steps
You will find the steps in the “Demonstration: Using Dependency Injection with Controllers “ section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD04_DEMO.md.

Module Review and Takeaways


In this module, you learned how the ASP.NET Core Web API request and response pipeline is structured,
and how to extend the response pipeline. You also learned how to use dependency injections with
ASP.NET Core Web API controllers.

Module 5
Hosting Services
Contents:
Module Overview 5-1

Lesson 1: Hosting services on-premises 5-3

Lab A: Host an ASP.NET Core Service in a Windows Service 5-7

Lesson 2: Hosting Services in Azure Web Apps 5-8

Lab B: Host an ASP.NET Core Web API in an Azure Web App 5-16
Lesson 3: Packaging services in containers 5-17
Lab C: Host an ASP.NET Core service in Azure Container Instances 5-30

Lesson 4: Implementing serverless services 5-31


Lab D: Implementing an Azure Function 5-44
Module Review and Takeaways 5-45

Module Overview
The most important aspect of implementing a service is hosting it so that clients can access it. For
Microsoft ASP.NET Core services, the host is responsible for allocating all the resources required for the
service. The host opens listening ports, creates an instance of a service when a request arrives, and
allocates memory and threads as required. If the host fails, the service fails. There is a one-to-one
dependency between the host and the service. The reliability and performance of the host directly affect
the quality of the service.

You can self-host your ASP.NET Core services. In this module, you will explore the various ways of hosting
your services on-premises and on Azure, and the benefits each type of host provides, in relation to issues
such as reliability, performance, and durability.

Apart from deciding the type of hosting service to use, web-hosted or self-hosted, you also need to think
about the hosting environment for your service - whether on-premises or in the cloud platform.
Considerations for deciding which environment to use include:

• Specific hardware requirements. When you host services on-premises, you have more control over the
hardware of your server than in the cloud platform. In the latter case, you only know how many
Central Processing Units (CPUs), memory, and disk space your virtual machines have.

• Scaling requirements. Hosting services on-premises requires predicting usage and provisioning servers
accordingly. Other than the costs involved with over-provisioning, on-premises hosting can also be
impacted by under-provisioning caused by rapid growth and an unpredictable increase in demand.
Hosting your services in the cloud environment keeps your servers available by using the elasticity of the
cloud platform to scale out when more resources are required.

• Legal requirements. In some countries, certain types of data, such as personal data, can only be stored
within the boundaries of the country. For on-premises hosting, this is achieved easily, but when you
host your services and data in the cloud platform, your data might be copied between data centers in
different locations on the globe, for reasons such as availability and backup.

Your decisions related to hosting type and hosting environment, although seemingly independent of each
other, can affect each other. For example, if you choose to host your services in the Microsoft Azure cloud
environment, you need to choose between hosting your services in Azure Web Apps or Docker containers,
or use Azure Functions.

Note: The Azure portal UI and Azure dialog boxes in Microsoft Visual Studio 2017 are
updated frequently when new Azure components and SDKs for .NET are released. Therefore, it is
possible that some differences will exist between screenshots and steps shown in this module and
the actual UI you encounter in the Azure portal and Visual Studio 2017.

Objectives
After completing this module, you will be able to:
• Host services on-premises by using Windows services and Microsoft Internet Information Services
(IIS).

• Host services in the Azure cloud environment by using Web Apps, Docker containers, and Azure
Functions.

• Package services in containers.


• Implement serverless services.

Lesson 1
Hosting services on-premises
When you want to host a web service on-premises, you can host it by using a Windows service or IIS. A
Windows service is a long-running application that runs in the background. Windows services have no
user interfaces, and they do not produce any visual output. Services run in the background while a user
performs any other task in the foreground, but they also run when a user is not logged on. This makes
Windows services a good candidate for classic server applications, such as an email server or a File
Transfer Protocol (FTP) server.

Running a Windows service without a user interface poses a debugging and operations challenge because
the user is not notified about warnings or errors. To overcome this, Windows services use the Windows
Event Log service and other logging frameworks to record tracing information and to notify the system
administrator about error conditions.

IIS is a Windows Web server that hosts web applications.


This lesson explains how to host ASP.NET Core services in a Windows service, and how to host ASP.NET
Core services in IIS.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how to host ASP.NET Core services in a Windows service.


• Explain how to host ASP.NET Core services in IIS.

• Compare ASP.NET Core service hosting in Windows services and IIS.

Self-hosting ASP.NET Core services in Windows services


You can use self-hosted console applications to
host your ASP.NET Core services. Hosting your
service in a self-hosted console application is
useful for quick proof-of-concept projects and
testing, but in real-world scenarios, you should
host your ASP.NET Core services in a background
process that cannot be shut down by the user
easily. Windows services allow you to run your
ASP.NET Core services in a background process.

Windows services are processes that the Windows


operating system deploys and manages. A
Windows service process runs in the background
without any UI, making it transparent to the user who may be unaware of the existence of the service
altogether. Windows services offer a suitable approach for implementing long-running processes that do
not require user interaction, and therefore are very useful for hosting your ASP.NET Core services.
The Windows operating system manages the loading and execution of Windows services. In addition, the
Microsoft Services Management Console is a UI tool that you can use for managing Windows services and
their configuration settings. To open the Microsoft Services Management Console, open Control Panel
from the Start screen, click Administrative Tools, and then click Services.

The following image is a screenshot of the Microsoft Services Management Console.

FIGURE 5.1: MICROSOFT SERVICES MANAGEMENT CONSOLE


You can configure a Windows service to start automatically when the system finishes booting, and to
restart in case a failure occurs. Both are useful features for ASP.NET Core service hosts. Another difference
between a Windows service and a foreground application started by the user is that a Windows service
runs within a security context that is different from the security context of the user. By default, a
specialized local identity, such as network service or local system, is used to run Windows services, but you
can change it to suit your needs.

For more information about service user accounts, refer to the following link.

Service user accounts.


https://aka.ms/moc-20487D-m5-pg1

Hosting an ASP.NET Core service in a Windows service


To host an ASP.NET Core service in a Windows service, you need to perform the following steps:

1. Create an ASP.NET Core project for .NET Framework.

2. Add <RuntimeIdentifier>win7-x64</RuntimeIdentifier> in the .csproj file under the


TargetFramework element.

3. Install the Microsoft.AspNetCore.Hosting.WindowsServices package.

4. Change the Main method to call RunAsService instead of Run (a sketch follows this list).

5. Open command prompt with administrator privileges.


6. Publish the app using the dotnet publish command.

7. Install the service using the sc create MyService binPath="PATH_TO_EXE" command.


8. Start the service using the sc start MyService command.
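
Step 4 can look like the following minimal sketch, assuming an ASP.NET Core 2.x project that references
the Microsoft.AspNetCore.Hosting.WindowsServices package. Setting the content root explicitly is a
common addition for Windows services (the working directory of a service is not the application folder)
and is not part of the numbered steps above:

Calling RunAsService in the Main method (illustrative)

public class Program
{
    public static void Main(string[] args)
    {
        var host = WebHost.CreateDefaultBuilder(args)
            .UseContentRoot(AppDomain.CurrentDomain.BaseDirectory)
            .UseStartup<Startup>()
            .Build();

        // RunAsService (from Microsoft.AspNetCore.Hosting.WindowsServices)
        // replaces the usual call to Run.
        host.RunAsService();
    }
}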

For more information about hosting ASP.NET Core in a Windows service, see:


https://aka.ms/moc-20487D-m5-pg2

Demonstration: Hosting Services On-Premises by using Windows Services


with Kestrel (RunAsService)
In this demonstration, you will publish a web app into a folder, create a new Windows service, and host
your app there by using a command line.

Demonstration Steps
You will find the steps in the “Demonstration: Hosting Services On-Premises by using Windows Services
with Kestrel (RunAsService)” section on the following page:

https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md

Hosting ASP.NET Core services in IIS


By hosting ASP.NET Core services in IIS, you can
benefit from reliability features offered by the
application pools and worker processes in IIS, such
as process recycling, health monitoring, message-
based activation, and idle shutdown.
You can use IIS to host multiple services and
isolate them from one another by using the IIS
application pool configuration and the worker
process mechanism. IIS provides better hosting by
managing the health of your service through the
application pool configuration. IIS performs
various actions on your service, such as the
following:

• Shuts down your service if it is idle for a long time, to conserve resources.
• Starts your service after shutdown, when a message arrives.

• Recycles your service if it uses too much CPU or memory over time.

• Protects your service with rapid fail protection if your service fails or is unresponsive for a long time.

IIS uses hierarchical directory management, where each virtual directory maps to a folder in the file system. A virtual directory contains static files, such as images and web pages, in addition to web applications such as ASP.NET Core services. Because IIS can host multiple web applications on a single server, you can deploy several ASP.NET Core services to IIS, each running independently of the others.
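
When IIS hosts an ASP.NET Core service, the site folder typically contains a web.config file that registers the ASP.NET Core Module, which forwards incoming requests to the Kestrel process. The following is a minimal sketch of such a file, assuming a framework-dependent deployment and a hypothetical BlueYonder.Flights.Service.dll assembly; dotnet publish normally generates this file for you.

Sample web.config for hosting an ASP.NET Core service in IIS (sketch)

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- Hand every request to the ASP.NET Core Module -->
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <!-- Launch the service with the dotnet host and forward requests to it -->
    <aspNetCore processPath="dotnet" arguments=".\BlueYonder.Flights.Service.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
  </system.webServer>
</configuration>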

Note: When different web applications share the same application pool, these applications
also share the same worker process. If one of the services causes its worker process to fail (for
example, because of a critical exception), all the hosted applications in the worker process will
also fail. To prevent such a scenario, consider separating web applications into different
application pools.

For more information about IIS architecture, refer to the following link.

Introduction to IIS Architecture


https://aka.ms/moc-20487D-m5-pg3

For prerequisite installation instructions, refer to the following link:


https://aka.ms/moc-20487D-m5-pg4

Demonstration: Hosting ASP.NET Core Services in IIS


In this demonstration, you will host your ASP.NET Core app in IIS, which is one of many ways to host an ASP.NET Core app.

Demonstration Steps
You will find the steps in the “Demonstration: Hosting ASP.NET Core Services in IIS” section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md.

Compare service hosting in Windows Services and IIS


To decide whether to host your Windows
Communication Foundation (WCF) application in
a Windows service or in IIS, you should be aware
of the advantages and disadvantages of each
method.

Lifetime
• Windows Service: Service process lifetime is controlled by the operating system, and is not message-activated.
• IIS: IIS shuts down idle services to improve resource management. The service is reactivated when a message is received.

Health Management
• Windows Service: No health management.
• IIS: Services are monitored and recycled when an error occurs.

Endpoint Address
• Windows Service: Configured in the app.config file.
• IIS: Bound to the IIS virtual directory path, which contains the .svc file.

Deployment
• Windows Service: Requires installation by using installutil.exe.
• IIS: Can be published from Visual Studio 2017 into IIS or into a package for future deployment.

For more information about hosting options, refer to the following link.
Hosting Services
https://aka.ms/moc-20487D-m5-pg5

Lab A: Host an ASP.NET Core Service in a Windows Service
Scenario
In this lab, you will learn how to create a new ASP.NET Core project and use Kestrel with RunAsService to host this project in a new Windows service.

Objectives
After completing this lab, you will be able to:

• Create a new ASP.NET Core Web API service.


• Register a new Windows service.

Lab Setup
Estimated Time: 15 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.

Exercise 1: Creating an ASP.NET Core Application


Scenario
In this exercise, you will create a new ASP.NET Core Web API by using the command prompt.

Exercise 2: Registering the Windows Service


Scenario
In this exercise, you will create a new Windows service, and then start and stop it from the command prompt by using the sc tool.

Lesson 2
Hosting Services in Azure Web Apps
Module 1: “Overview of Service and Cloud Technologies” discussed various Azure Cloud Services, which
help develop and publish web applications at a global, distributed scale. Because web services and web
applications are instrumental to every system’s success, Azure has a number of ways to host web services,
based on your specific requirements. It can be as easy as writing a few lines of code in a single function
and deploying that function to Azure, or as flexible as building a complete virtual machine environment
running your favorite web server and web application.

In this lesson, you will explore the options available for hosting ASP.NET Core web services in Azure and
focus specifically on web apps running in Azure App Service. You will also explore the Azure App Service
features for sizing, scaling, and publishing your web service, and which other languages and platforms are
supported on Azure App Service, in addition to ASP.NET Core.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the various hosting options for web services in Azure.


• Describe the Azure App Service features for hosting web services.

• Develop and publish ASP.NET Core web services to Azure App Service.

Hosting options for ASP.NET Core web services in Azure: Azure Web Apps,
Azure Container services, Azure Functions
As mentioned earlier, there are numerous ways to
host web services on the Azure cloud platform,
which greatly vary in complexity, flexibility, and
customization options. Azure provides both PaaS
(platform-as-a-service) and IaaS (infrastructure-as-
a-service) options for hosting web services, but
this lesson will focus mostly on PaaS. The
advantages of having the Azure cloud platform
managing the underlying infrastructure are
numerous: you don’t have to worry about
provisioning virtual machines and installing web
server software, you don’t have to worry about
operating system updates, you can very easily scale your application without manually deploying
additional machines, and so on.

The main alternatives for hosting a web service or a web application on Azure are:

• Azure Web Apps (Azure App Service). A powerful and flexible PaaS platform that provides automatic
scaling, easy deployment, and a fair amount of customization options. You don’t control the
underlying infrastructure and operating system, but you do control your application’s deployment,
dependencies, and configuration. You can size your service according to its current workload
demands, and then scale it to more powerful machines or a larger number of instances with only a
click of a button.

• Azure Functions. A scalable and flexible serverless platform. You give up additional control over the
underlying platform, but gain the ability to develop your application as a set of tiny building blocks
that are very easy to test in isolation and compose together. Azure Functions are discussed in Lesson
4: ‘Implementing serverless services’.

• Azure Container Service. An orchestration platform for applications deployed using containers. You
package your application and its dependencies into a container image, which can run anywhere and
scale as necessary. Containers and Azure Container Instances are discussed in Lesson 3: ‘Packaging
services in containers’.

• Microsoft Azure Virtual Machines. An IaaS platform for running virtual machines with predefined or
customized images. You have complete control over the execution environment, including unfettered
admin access to the target machines. On the other hand, you have to worry about operating system
updates, installations, dependencies, scaling, and more.

• Azure Cloud Services. A deprecated PaaS offering for running service deployments on top of virtual
machines managed by the Azure platform. Although Azure Cloud Services were among the first
features of the Azure platform, it is recommended that most customers migrate to either higher-level
PaaS solutions, such as using Azure App Service, or lower-level IaaS solutions with Azure Virtual
Machines.

For more information on publishing an ASP.NET web application to Azure Virtual Machines,
refer to the following tutorial.
https://aka.ms/moc-20487D-m5-pg6

When choosing a hosting environment for your web application or web service on Azure, there are many
areas of overlap. After completing this module, you will have a better picture of the available offerings
and how they can be customized and adapted to your system’s needs. Some general things to consider
include:

• Do you need a complete control over the execution environment, such as Remote Desktop or SSH
access? If so, consider using Azure Virtual Machines.

• Can you build your application as a set of standalone, independent, scalable functions that rely on
other Azure services for state storage? If so, consider using Azure Functions.

• Do you already package or plan to package your application as a Docker container? Are you
considering using container orchestration platforms such as Kubernetes? If so, consider using Azure
Container Service.

• Are you building a standard web application or service (API) that doesn’t meet any of the above
criteria? If so, consider using Web Apps feature of the Azure App Service.

Introducing Azure Web Apps – tiers, platform support, machine sizes, auto-scale
Azure App Service is a collection of services that
you can use to build web applications, web
services, serverless applications, mobile APIs, and
many other types of apps, in a variety of
programming languages and frameworks. Azure
Web Apps (part of Azure App Service) is a
platform specifically focused on hosting web
applications and web services without having to
manage the infrastructure. You can enjoy
automatic scaling, automatic deployments from
source control systems, multiple deployment slots,
smart monitoring and logs, and many other
features — without deploying and managing virtual machines, web servers, and operating system
updates.

The key features and benefits of Azure Web Apps include:


• Convenient deployment from various sources, including Visual Studio 2017, GitHub, and Docker Hub.
You can manually publish web services from your development environment, or configure an
automatic workflow that deploys the service to the production environment on every successful
commit or build. When necessary, you can even use FTP or a command-line shell in the browser to
copy files or troubleshoot issues.
• Manual and automatic scaling. You can easily change the instance type assigned to your service and
create multiple instances; you can also configure rules that will scale your application automatically.
(Scaling Azure Web Apps is further discussed in Module 10: ‘Scaling Services’, Lesson 2: ‘Automatic
Scaling.’)

• Support for multiple languages and development frameworks. You can use ASP.NET Core, Node.js,
Java, Python, PHP, and Ruby—and these are just the officially supported runtimes. For the target
platform, you can choose between Windows IIS and Linux.
• Powerful monitoring and diagnostics platform. Includes automatically-collected performance metrics,
support for diagnostic log streaming, web server log collection, and a troubleshooting console that
can inspect files, processes, and other types of information on the actual machine running your
service.

Additional useful features include SSL support, custom domains, IP address restrictions, integrations with
other Azure services, security and compliance, and many others.

For more information about Azure Web Apps features, see:


https://aka.ms/moc-20487D-m5-pg7

Note: Although it appears that the Azure App Service provides a granular control over the
target machine, there are actually some restrictions on what your code can do in the
environment, even if you’re using a plan that assigns you a dedicated virtual machine. For
example, the user account that runs your application is not assigned administrator privileges,
which means there are some types of privileges not available to it. The capabilities that are not
available include full Windows registry access, using Event Tracing for Windows, and
reconfiguring low-level network settings.

For more information about the operating system functionality available to application and
services in Azure App Service, refer to the following link.
https://aka.ms/moc-20487D-m5-pg8

When you create a new Azure Web App, you also create (or choose) an Azure App Service plan in which it runs. Multiple applications and services can share a single App Service plan. The plan defines the region in which your compute resources will be provisioned, how many machines will serve your traffic, and the size of these machines. There are multiple pricing tiers that you can choose from, which determine the resources and features available to your application:

• Free. This is the lowest tier. Your application or service runs on a shared machine, which also runs
other customers’ applications. Your service is assigned a CPU quota (60 minutes a day), memory
quota, disk space quota, and network traffic quota that you cannot exceed. When the quota is
exceeded, your application becomes unavailable until the end of the billing period. You should only
use the Free tier for development and testing, or for very low-traffic applications and services.
• Shared. Your application or service still runs on a shared machine, and is still assigned a CPU quota of
240 minutes a day that you cannot exceed. Other resources, such as networking, are charged using
standard pricing. You also get access to some features, such as custom domains for your service,
which are not available in the Free tier.

• Dedicated. Your application or service runs on dedicated virtual machines, shared only with other
apps in your App Service plan. You can control the size of the machines (see below) and the number
of machines to deploy, up to 20 instances. Even more features are available in this tier, such as
deployment slots (discussed in Module 6: “Deploying and Managing Services, Lesson 5: “Deploying to
Staging and Production”), Traffic Manager integration (discussed in Module 10: ‘Scaling Services’,
Lesson 3: ‘Azure Application Gateway and Traffic Manager’), automatic backups, and others.

• Isolated. Similar to the Dedicated plan, but your virtual machines are also part of a separate Microsoft Azure Virtual Network, which means they are isolated in terms of networking and not only compute. You can further control the size and number of the machines, up to 100 instances.
When using the Dedicated and Isolated tiers, you can control the size of the virtual machines running
your application and the number of machines that are created for you. Below are some examples:

• In the Dedicated - Basic plan, you can scale to up to 3 machines. The machine size ranges from 1
core with 1.75 GB RAM to 4 cores with 7 GB RAM.

• In the Dedicated - Standard plan, you can scale to up to 10 machines. The machine size ranges from
1 core with 1.75 GB RAM to 4 cores with 7 GB RAM.

• In the Dedicated - Premium plan, you can scale to up to 20 machines. The machine size ranges from
1 core with 3.5 GB RAM to 4 cores with 14 GB RAM. The CPUs in these machines are faster than those
in the Standard plan, and they are equipped with SSD storage.
• In the Isolated plan, you can scale to up to 100 machines. The machine size ranges from 1 core with
3.5 GB RAM to 4 cores with 14 GB RAM. There is also an additional flat fee for each App Service
Environment when using this plan.

For more information about the features included in each Azure App Service pricing plan,
refer to the following link:
https://aka.ms/moc-20487D-m5-pg9

For more information on Azure App Service pricing, refer to the following link:
https://aka.ms/moc-20487D-m5-pg10

The following image shows the Web App Create dialog box.

The following screenshot illustrates the process of creating a new Azure App Service plan. You specify the
location and the pricing tier for the plan and can choose the exact combination of resources and prices
that you require.

FIGURE 5.3: A SCREENSHOT OF THE NEW APP SERVICE PLAN WINDOW.

The following screenshot shows the Overview blade for the newly created web app. Note the deployment
details on the right, which you can use to deploy through FTP or configure other forms of deployment.

FIGURE 5.4: THE OVERVIEW BLADE FOR THE NEWLY CREATED WEB APP
The following screenshot shows the Scale out blade in the App Service settings, which you can use to
change the number of instances assigned to your App Service plan.

FIGURE 5.5: THE SCALE OUT BLADE IN THE APP SERVICE SETTINGS

Developing for Web Apps


As explained in the previous topic, you can
develop web apps in various languages and
publish them to Azure Web Apps. This includes
ASP.NET Core web applications and services. You
can develop your application locally using the
familiar Visual Studio 2017 environment, and
publish the application to Azure Web Apps with
the click of a button. By using the Visual Studio
Publish dialog box, you can create a new Azure
Web App and deploy your application to it, or
deploy to an existing Azure Web App that you
provisioned by some other means.

Note: By using Visual Studio 2017, you can also integrate your application with additional
Azure services, such as Azure SQL Database, discussed in Module 7: ‘Implementing Data Storage
in Azure’, Lesson 3: ‘Working with Structured Data in Azure’. Additional deployment options from
Visual Studio to Azure App Service deployment slots are discussed in Module 6: ‘Deploying and
Managing Services’, Lesson 5: ‘Deploying to Staging and Production’.

The following screenshot shows the Visual Studio Publish dialog box, configured to create a new
Azure Web App and publish the current project to it.

FIGURE 5.2: THE VISUAL STUDIO PUBLISH DIALOG BOX


The following code example shows how an ASP.NET Core application can read a value from the
configuration data merged with environment variables.

Reading configuration data


[Route("api/[controller]")]
public class FlightsController : Controller
{
    private readonly IConfiguration _configuration;

    // The 'config' parameter is injected by ASP.NET Core
    public FlightsController(IConfiguration config)
    {
        _configuration = config;
    }

    [HttpGet]
    public Flight GetFlightById(string id)
    {
        bool isProduction = bool.Parse(_configuration["IsProduction"]);
        // … The rest of the code
    }
}

For more information on how to merge the environment variables into the configuration
available to your ASP.NET Core application, refer to the following link:
https://aka.ms/moc-20487D-m5-pg11
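
As a minimal sketch of that merge (WebHost.CreateDefaultBuilder already performs a similar setup for you), the following code layers environment variables on top of appsettings.json. In Azure Web Apps, the application settings that you define in the portal are exposed to the process as environment variables, so they override the file-based values; the IsProduction key matches the earlier example.

Merging environment variables into the configuration (sketch)

using System.IO;
using Microsoft.Extensions.Configuration;

public static class ConfigurationFactory
{
    public static IConfiguration Build() =>
        new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            // Environment variables (including App Service application settings) are
            // added last, so they override values loaded from appsettings.json.
            .AddEnvironmentVariables()
            .Build();
}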

The following screenshot illustrates how you can modify environment variables and application
configuration settings for your Azure Web App.

FIGURE 5.3: THE APPLICATION SETTINGS TAB

Demonstration: Hosting ASP.NET Core Web APIs in Web Apps


In this demonstration, you will see how to create an Azure Web App in the Azure portal, host an ASP.NET Core application in the Azure Web App, and then access the service in the browser.

Demonstration Steps
You will find the steps in the “Demonstration: Hosting ASP.NET Core Web APIs in Web Apps” section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD05_DEMO.md.

Lab B: Host an ASP.NET Core Web API in an Azure Web App
Scenario
In this lab, you will explore several frameworks and platforms that are used for creating distributed applications, such as Entity Framework Core, ASP.NET Core Web API, and Microsoft Azure.

Objectives
After completing this lab, you will be able to:

• Create a Web App in Azure Portal.

• Host an ASP.NET Core application in Azure Web App.

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.

Exercise 1: Creating a Web App in the Azure Portal


Scenario
In this exercise, you will create a new Web App in the Azure portal.

Exercise 2: Deploying an ASP.NET Core Web API to the Web App


Scenario
In this exercise, you will deploy an ASP.NET Core Web API to the Web App that you created in the Azure portal.

Lesson 3
Packaging services in containers
Since their popularization by Docker Inc., containers have quickly become the de-facto industry standard
for packaged software delivery. By using containers, developers can package the application along with all
its dependencies, while administrators can deploy and monitor the application across a variety of
infrastructures.

In this lesson, you will explore the benefits of container technologies, the fundamentals of Docker
containers, and Docker integration in Visual Studio 2017. You will use Visual Studio 2017 to create your first
Docker container running an ASP.NET Core application and publish it to Azure Container Instances,
Microsoft’s lightweight cloud solution for hosting individual containers quickly and efficiently.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain how OS virtualization differs from hardware virtualization.

• Describe the benefits of container technologies.

• Create Docker container images and run container instances.


• Deploy Docker containers to container image registries.

• Run ASP.NET Core applications in Docker containers.

• Publish ASP.NET Core applications into Azure Container Instances.

OS virtualization and hardware virtualization


You can use containers to package your software
application or service, ship it between different
environments, deploy it, and run it at scale.
Containers represent a significant shift in the
software industry: from hardware virtualization to
OS virtualization.

Hardware virtualization is a mechanism that abstracts away the underlying hardware and runs multiple operating system instances on a single set of hardware. As a result, applications running in these operating system instances are completely isolated from one another, which is great for security and resource limiting. On the other hand, there is a lot of duplication when deploying software in virtual machines: if two applications that use .NET Core on Windows Nano Server are deployed in two separate virtual machines, the Windows files and .NET installation need to exist twice on the target machine.
OS virtualization is a mechanism that abstracts away the operating system kernel, and runs multiple
processes (containers) in isolation from each other, while sharing a single kernel. Applications running in
containers are isolated from each other through the use of numerous operating system abstractions. As a
great benefit, there is no duplication of files when deploying software in containers: if two applications

that use Java on Ubuntu Linux are deployed in two separate containers, the Ubuntu files and the Java
Virtual Machine installation will be shared between the two containers on the disk.

On the other hand, the isolation provided by container runtimes is not as hermetic as that provided by
hardware virtualization; from many perspectives, containers should not be considered a security
boundary.

Note: Historically, containers have been available in one shape or another for several
decades. For example, Solaris zones were released in early 2005, and are a fairly comprehensive
containerization technology. The Linux kernel mechanisms used by Docker (control groups,
namespaces, and security modules) were also available for several years before Docker’s
popularity exploded. Still, Docker was able to bring containers into the mainstream by creating
an easy-to-use solution that makes container technology approachable and usable by typical
developers and administrators.

The operating system primitives that isolate containers from each other are different from the primitives
that isolate virtual machines. Each container has its own view of the file system, its own list of processes,
and its own network interfaces, even though at the operating system level, these are all shared between
containers. The mechanisms for restricting container resource utilization are also different from those used
for virtual machines. You can limit the CPU usage of a container (for example, assign 50% of one CPU core to a container), the memory usage of a container, and even the disk reads and writes it can perform
to a specific disk device. These mechanisms are provided by the operating system. However, when using
virtual machines, these would have to be provided by the hardware virtualization mechanism (a
hypervisor).

Types of Virtualization Technologies


https://aka.ms/moc-20487D-m5-pg12

Benefits and use cases of container technologies


Container technology has become popular
because of numerous benefits you can gain from
packaging your application in containers, as
opposed to direct (bare metal) deployment, or
using hardware virtualization. These benefits
include:

• Lightweight, high-density packaging.


Containers share the same operating system
kernel and can share a lot of the on-disk files
with each other. As a result, they can start
faster and consume fewer memory resources.
This results in higher density deployments.
You can co-locate many more containers on a single machine than you can on virtual machines that
use hardware virtualization. It is also easier to scale container-based deployments up and down
because it only takes seconds to bring up dozens of new container instances to service new traffic,
and then take them down as the load subsides. Because of their fast startup times, containers are also
suitable in situations when a container needs to run for only a few seconds, finish its job, and
disappear.

• Consistent environment from dev to prod. Developers can package software and versioned
dependencies into a container image that is used consistently in the development environment, in
testing, in staging, and in production. This results in productivity gains, as teams don’t spend time
diagnosing issues resulting from environmental differences or having different versions of software
dependencies and libraries.

• Isolation. Although containers are lightweight and share numerous resources with each other, they
are still quite isolated. Containers cannot accidentally access each other’s files, processes, or in-
memory objects; they also can’t send each other network traffic without explicit configuration.

• Infrastructure as code. The Docker image format makes it possible to create a container image that
can be run anywhere, on any operating system or distribution. All major cloud providers, including
Microsoft Azure, provide services for hosting and orchestrating container-based deployments. To
create Docker images, you will often use Dockerfiles, simple text files that describe a step-by-step process of creating a container image from a base image by adding files, installing required software packages, and setting environment variables.

Docker image format


https://aka.ms/moc-20487D-m5-pg13

Among the ways containers are used by leading companies today are the following use-cases,
highlighting the unique advantages of container-based systems:

• Distributed applications and micro-services. Containers make it easy to deploy your system as a mesh
of interconnected micro-services, responsible for small slices of your application’s functionality. Each
micro-service can be developed, tested, deployed, and versioned independently of the others, and
isolated using container technology.
• Batch jobs. You can create a standalone batch job and package it into a container image. The
resulting container image can then be deployed across a variety of pipelines, and run in parallel very
easily.
• Continuous integration (CI). You can use containers in your CI/CD pipeline to build your application, test it in isolation, and then deploy it to production with the same consistent environment used by developers on their personal machines. The build artifact from the CI pipeline can be a versioned container image, which can be deployed elsewhere for debugging and reproducibility when required.

Docker files and commands


Docker Inc. provides a packaged software product
that can be installed on Windows, Linux, and
macOS and helps create, run, diagnose, and
deploy containerized applications. The core of
Docker Inc.’s product is an open source project
called Moby, with contributions from various
organizations, including Microsoft, Google, and
Docker Inc.

Moby project and its relationship with Docker Inc.


https://aka.ms/moc-20487D-m5-pg14

When you install Docker on your machine, the Docker engine runs in a service process, and you
can interact with it using the Docker client application, which is a command-line tool
(docker.exe on Windows, docker on other platforms). Other ways to interact with the Docker
engine include Kitematic (a GUI application) and Visual Studio 2017. In the next topic, you will
learn that Visual Studio 2017 has a comprehensive set of tools for interacting with Docker, which
you can use when developing and subsequently publishing your ASP.NET Core application.

The Kitematic main screen on Windows is shown below.

FIGURE 5.4: THE KITEMATIC MAIN SCREEN ON WINDOWS

To learn more about Kitematic and download it to your machine, visit the following link.
https://aka.ms/moc-20487D-m5-pg15

In Docker’s terminology, a container image is a packaged application with all its dependencies,
configurations, and executable code. It is implemented as a simple tar archive, which—if extracted—forms
a set of one or more files on disk. A container or a container instance is a running instance of an image;
you can create multiple instances from the same image on a single machine or on multiple machines.

Some of the key Docker commands include the following:


docker run. Creates a new container from the specified container image and runs it on your machine. The
container can run in the background (as a kind of background service) or run interactively, receiving
terminal input.

The following command launches the hello-world container image in your terminal and displays its
output.

Launching the hello-world container command


docker run hello-world

docker ps. Lists the currently running containers, including their names, network ports, and identifiers.

docker kill. Kills a currently running container, but doesn’t delete the container files from disk. A
container killed with docker kill can be subsequently restarted with docker start.
docker rm. Deletes a container’s files from disk. This action cannot be undone. If you need your container
to store persistent data, you should use volumes, which are outside the scope of this course.
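
The following command sequence is a sketch of that lifecycle, assuming a running container that was started with the hypothetical name flights:

Container lifecycle commands (sketch)

docker ps
docker kill flights
docker start flights
docker rm -f flights

Here, docker rm -f both stops and deletes the container; without the -f flag, the container must already be stopped before it can be removed.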

Docker volumes

To learn more about Docker volumes, visit the following link:


https://aka.ms/moc-20487D-m5-pg16

You can store Docker images only on your local machine, but this is uncommon. In most cases, a
container registry stores versioned container images that can be pulled to a machine for execution. You
can push an image to a container registry after building it. As discussed in the previous topic, a common
workflow is to have your CI build server produce a versioned container image and push it to the container
registry used by the rest of your infrastructure, including the deployment process.

Before you push a container image to a registry, you need to tag it. The docker tag command will add a
tag to an image, and the docker push command will push it to a container registry. The default container
registry is Docker Hub, which you can use to store an unlimited number of container images for free as
long as they are publicly accessible. Private container registries are also available from multiple vendors,
including Microsoft’s Azure Container Registry. You can even run your own registry in a container using
Docker’s official registry container image.

Docker Hub
https://aka.ms/moc-20487D-m5-pg17

Selecting the right container image has a big effect on your disk and memory footprint. For example,
using a .NET Core SDK image with build tools when you only need the .NET Core runtime is wasteful, and
can take hundreds of megabytes of additional space. The following important images for running .NET Core and ASP.NET Core applications are distributed by Microsoft to the Docker Hub container registry.

• microsoft/dotnet:runtime (Linux Debian Stretch or Windows Nano Server): .NET Core runtime files; can be used for launching (but not building) .NET Core applications.

• microsoft/dotnet:sdk (Linux Debian Stretch or Windows Nano Server): .NET Core runtime and SDK installation (build tools); can be used for building and launching .NET Core applications.

• microsoft/dotnet:runtime-deps (Linux Debian Stretch): Only the Linux libraries required for running self-contained .NET Core applications (does not include the .NET Core runtime or the SDK).

• microsoft/aspnetcore:2 (Linux Debian Stretch or Windows Nano Server): ASP.NET Core runtime files; can be used for launching (but not building) ASP.NET Core applications.

• microsoft/aspnetcore-build:2 (Linux Debian Stretch or Windows Nano Server): ASP.NET Core runtime and SDK installation (build tools); can be used for building and launching ASP.NET Core applications.

Note: Most of the container images in the above list have the exact same names across
different operating systems. For example, when you use Docker on Windows with Windows
Containers and pull the microsoft/dotnet:runtime tag, you will get a Windows Nano Server
container; but if you use Docker on Windows with Linux Containers (or Docker on Linux or
macOS), you will get a Linux container running Debian.

If you plan to use container images built by others, such as Microsoft’s ASP.NET Core container images,
you will only need to copy in your application’s files and configure environment variables, volumes, and
networking ports. However, if you plan to build your own container images (for example, to have more precise control over the software installed in the container), you will need to write a Dockerfile.
A Dockerfile is a simple text specification containing instructions for building a container image. The new
image is always based on an existing image (even when using the empty scratch image), and can
customize that image with additional software installations, environment variables, files, and arbitrary
commands.

The following Dockerfile is based on the microsoft/aspnetcore:2 image. It adds the application’s binary
files from the host’s current directory to the /app directory in the container, and specifies that the
command for launching the container is dotnet:

Basic Dockerfile definition for ASP.NET Core application


FROM microsoft/aspnetcore:2
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "BlueYonder.Flights.Service.dll"]

For a full reference of the Dockerfile language, visit the following link.
https://aka.ms/moc-20487D-m5-pg18

To build a Dockerfile, you use the docker build command, which produces a new container image that
you can then run or push to a container registry. The docker build command sends the build context to
the Docker engine along with the Dockerfile; by default, the build context includes all the files under the
current directory. This is important when the Dockerfile references some of the files in the build context.
For example, to copy application binaries or configuration files into the resulting container image using
the ADD or COPY commands.

Note: You can use a .dockerignore file to specify which files should not be sent as part of
the build context. This is similar to the .gitignore file used by the Git source control system.
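
For example, a .dockerignore file for an ASP.NET Core project might exclude build output and tooling folders. The following is only a sketch, and the patterns should be adjusted to your own repository layout.

Sample .dockerignore file (sketch)

bin/
obj/
.vs/
.git/
*.md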

Use the following commands to build a new container image with a local tag from the Dockerfile in the
current directory (.), tag it with a specific user tag, and push it to Docker Hub:

Build, tag, and push docker image commands


docker build -t flights:v1 .
docker tag flights:v1 blueyonder/flights:v1
docker push blueyonder/flights:v1

Note: The preceding code snippet used a fairly common format for image tags, where the
tag follows a version number. The version number is completely up to you, and it is not
consumed or used automatically by Docker’s tools.
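
After building the image, you can try it locally before pushing it. The following sketch assumes that the flights:v1 image from the preceding example listens on port 80 (the default for the Microsoft ASP.NET Core base images) and maps it to port 8080 on the host:

Running the image locally (sketch)

docker run -d -p 8080:80 --name flights flights:v1

The service is then reachable at http://localhost:8080 on the host, and docker logs flights shows its console output.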

Demonstration: Creating an empty ASP.NET Core Docker container


In this demonstration, you will learn how to create a Docker container with an ASP.NET Core application.

Demonstration Steps
You will find the steps in the “Demonstration: Creating an empty ASP.NET Core Docker container” section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD05_DEMO.md.

Visual Studio Tools for Docker and Docker Compose


In Visual Studio 2017, Microsoft introduced a suite
of tools for adding Docker support to a project
and manipulating Docker container images from
the familiar Visual Studio environment. These
tools are collectively known as the Visual Studio
Tools for Docker. Note that installing these tools
does not also install Docker on your machine: you
still need to install Docker from the official source
before you can use these tools.

To install Docker on your Windows machine alongside Visual Studio 2017, refer to the
following link:

https://aka.ms/moc-20487D-m5-pg19
When you create a new ASP.NET Core project with Docker support enabled, or when you add Docker
support to an existing project, Visual Studio creates a number of files for you:

• Dockerfile. Contains instructions for creating a container image that hosts your web application,
based on Microsoft’s official Docker image.
• Docker-compose.yml. A Docker Compose manifest file (see below) that helps bring your application
up along with any dependencies, network port mappings, volumes, and environment variables.

Additionally, Visual Studio 2017 configures the build process such that when you build your project, the
Docker client is invoked to build your Docker container image; and when you launch your project, the
Docker client is invoked to launch that container and attach a debugger to it.

The following screenshot illustrates the Visual Studio New Project wizard with the Docker support check
box.

FIGURE 5.5: VISUAL STUDIO NEW PROJECT WIZARD


Docker Compose is a tool for defining multi-container applications, which helps build, run, and deploy
multiple linked containers at once. Docker Compose is based on a YAML-formatted file, typically named
docker-compose.yml, which contains instructions for composing your multi-container application from
individual container images. For example, a single manifest file can describe a web application comprised
of an ASP.NET Core Web API container, a Redis Cache container, a SQL Server database container, and an
Nginx reverse proxy container. By using Docker Compose, you can easily orchestrate building and
deploying this application. Just like regular Docker builds, Docker Compose will automatically detect
which containers have changed and will rebuild only these containers, leading to more rapid iteration
times.

The following docker-compose.yml file describes an ASP.NET Core web application container and a
linked Redis Cache container, which will be brought up or down as a single logical unit.

Docker compose definition in YAML format


version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

In the preceding example, the Redis Cache container will be deployed directly from the upstream
redis:alpine image, which will be pulled from Docker Hub. The web application container will be built
from the current directory, and have its port 5000 forwarded to port 5000 on the host.
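
Assuming the preceding file is saved as docker-compose.yml in the project directory, a typical workflow for bringing the application up and down might look like the following sketch:

Running the composed application (sketch)

docker-compose up -d
docker-compose ps
docker-compose down

The up -d command builds the web image if needed, pulls redis:alpine, and starts both containers in the background; down stops and removes them as a single logical unit.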

Use the following link to refer to the docker-compose.yml file format.


https://aka.ms/moc-20487D-m5-pg20

Packaging an ASP.NET Core application in a Docker container


After understanding how Docker works in general,
and what the Visual Studio Tools for Docker bring
to the table, you will now explore the full
Dockerfile file created by Visual Studio Tools for
Docker, and learn how you can use it to build,
debug, and publish your ASP.NET Core
application.

This is the complete Dockerfile created by Visual


Studio Tools for Docker when generating a new
ASP.NET Core Web API project, with all the default
template settings:

Dockerfile generated by Visual Studio


FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY HelloWebApp.sln ./
COPY HelloWebApp/HelloWebApp.csproj HelloWebApp/
RUN dotnet restore -nowarn:msb3202,nu1503
COPY . .
WORKDIR /src/HelloWebApp
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "HelloWebApp.dll"]

The preceding Dockerfile consists of multiple sections, which specify instructions for a Docker multi-step
build. Multiple intermediate containers will be generated, but only the last FROM section specifies which
container image is the result of the build:

• FROM … AS base. This section declares the first build step, which is an exact clone of the
microsoft/aspnetcore:2.0-nanoserver-1709 container image, but it adds the /app directory and
exposes port 80.

• FROM … AS build. This section declares another build step, in which you use a different container
image as base: the microsoft/aspnetcore-build:2.0-nanoserver-1709 container image. This image
is designed for building ASP.NET Core web applications, and not just hosting them, so it contains the
compiler, build tools, and everything else required for building applications. The subsequent
instructions specify that a /src directory should be created, the solution and project files copied in,
and then the dotnet restore step runs to restore NuGet packages. Finally, the application source files
are copied in and the dotnet build step runs to build the application and copy the results to the /app
directory.

• FROM build AS publish. This section is based on the previous step and runs the dotnet publish
command, which finalizes the application for deployment and copies the resulting files to the /app
directory. Note that this container is still based on the aspnetcore-build image, which contains the
ASP.NET Core build tools.

• FROM base AS final. This section is based on the base image (the first build step), which does not
contain build tools and is designed for running a packaged application. The COPY command copies
the /app directory contents from the publish image, and then declares that the entry point for the
ASP.NET Core application is dotnet HelloWebApp.dll. Note that the dotnet command is part of the
aspnetcore image.

When you build the project in Visual Studio 2017 or launch it for debugging, Visual Studio 2017 launches
the Docker client and builds the container image. You can see the build steps in the Output window, as in
the following screenshot.
Visual Studio Output window, showing the build steps of the Docker container.

FIGURE 5.6: VISUAL STUDIO OUTPUT WINDOW


You can use the Visual Studio Publish wizard to publish your Docker container image to Docker Hub, or
any other container registry. The Publish wizard helps authenticate to your Docker Hub account, or, if you
prefer, helps connect to an existing Azure Container Registry (or create a new Azure Container Registry if
necessary).

The following screenshot shows the Visual Studio Pick a publish target dialog box for publishing into a
container registry.

FIGURE 5.7: VISUAL STUDIO PUBLISH DIALOG BOX



Demonstration: Publishing into a Container


In this demonstration, you will learn how to package an ASP.NET Core application as a Docker container and push the container to Docker Hub.

Demonstration Steps
You will find the steps in the “Demonstration: Publishing into a Container” section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md.

Publishing ASP.NET Core services into Azure Container Instances


Although creating containers locally seems easy,
running containerized services in production
requires a lot of additional work. Some of the
aspects you will need to consider include:

• Provisioning hosts to run your container instances

• Scheduling container instances to hosts

• Determining which resources to assign a container and managing these resources

• Scaling containers automatically based on load

• Sharing data between containers in a safe and isolated manner

• Managing container data at rest, when there is no container accessing it

• Discovering containerized services so they can talk to each other

• Injecting secrets and configuration values into containers


• Monitoring container performance, errors, and logs

Azure provides multiple container-related services that can run containerized workloads at scale. These
include the following:

• Azure Container Service (ACS). Helps run a cluster of virtual machines hosting containers and
container orchestrator nodes. ACS supports Datacenter Operating System (DC/OS), Kubernetes, and
Docker Swarm as orchestrators, but requires quite a bit of manual management for provisioning,
updating, and maintaining the cluster. Azure Kubernetes Service (AKS) provides a managed
Kubernetes cluster, where the management and worker nodes are managed by the Azure platform.
You only need to specify the number of worker nodes you want, and everything else—from
provisioning to upgrades—is managed by the platform. Even though ACS and AKS dramatically
simplify the process of productionizing your container-based application, they require some
background in container orchestration concepts, such as Kubernetes pods, services, and replicas (for
Kubernetes clusters).

• Azure Container Registry. Helps securely store versioned container images in an Azure-hosted
registry, and makes them accessible to your other Azure services. You can publish Docker container
images to Azure Container Registry from Visual Studio 2017, or from your build pipeline in Visual
Studio Team Services and other tools.

• Azure Batch. Helps run large-scale batch jobs on Azure’s compute infrastructure. Although Azure
Batch supports non-container workloads as well, it now has first-class support for containers, so you
can package your batch job in a container image and ship it to Azure Batch for massively parallel
execution.

• Azure App Service. Helps run web applications without worrying about the underlying infrastructure.
In previous modules, you learned to deploy web applications directly to App Service without using
containers. However, App Service supports containerized applications, so that instead of deploying
code or Web Deploy packages, you can publish a container image to App Service and have it hosted
on the App Service environment.
• Azure Container Instances. The newest container-related offering of the bunch, helps create and run
container instances without worrying about cluster orchestration, management nodes, and other
concerns that arise from coordinating thousands of container instances. Azure Container Instances is
suitable for simple container-based workloads, where you want to rapidly deploy a handful of
container images and make them accessible with a public IP address.

To run an ASP.NET Core web application in Azure Container Instances, you need to publish your
application’s container image to a container registry, such as Azure Container Registry. You can use the
Visual Studio Publish wizard to create a new Azure Container Registry and then publish the container
image to that registry; or, you can use the Azure portal to create the Azure Container Registry first. Then,
you can use the Azure CLI or the Azure portal to create a new container instance using the published
container image.
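
As a sketch of the CLI route, assuming the blueyonder/flights:v1 image from earlier in this lesson has been pushed to a publicly accessible registry, and using hypothetical resource group and container names, the commands might look like the following:

Creating a container instance with the Azure CLI (sketch)

az group create --name blueyonder-rg --location westeurope
az container create --resource-group blueyonder-rg --name flights-service --image blueyonder/flights:v1 --ports 80 --ip-address Public --os-type Linux
az container show --resource-group blueyonder-rg --name flights-service --query ipAddress.ip --output tsv

The last command prints the public IP address assigned to the container group, which you can then use to browse to the service.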
The following screenshot shows how to create a new container instance on Azure Container Instances in
the Azure portal.

FIGURE 5.8: THE BASICS TAB OF THE CREATE AZURE CONTAINER INSTANCES WINDOW IN THE AZURE PORTAL

The following screenshot shows the second step of the wizard, where the operating system platform, number of CPUs, memory requirements, and public IP address and port settings are configured.

FIGURE 5.9: THE CONFIGURATION TAB OF THE CREATE AZURE CONTAINER INSTANCES WINDOW IN THE AZURE PORTAL
At the end of the deployment process, your container instance can be accessed using its public IP address
and port, per your selection in the wizard. There is no need to worry about management nodes, clusters,
orchestration, versioning, and many other concerns that come with the more advanced container hosting
offerings, such as ACS.

Lab C: Host an ASP.NET Core service in Azure Container Instances
Scenario
In this lab, you will use Azure Container Instances to host the hotel booking service that you implemented previously.
To simplify the lab, you will use an in-memory database with Entity Framework Core.

Objectives
After completing this lab, you will be able to:

• Package and publish an ASP.NET Core service to a Docker container

• Host an ASP.NET Core service in Azure Container Instances

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.

Exercise 1: Publishing the Service to a Docker Container


Scenario
In this exercise, you will package an ASP.NET Core service as a Docker image and publish it to Docker Hub.

Exercise 2: Hosting the Service in Azure Container Instances


Scenario
In this exercise, you will deploy an ASP.NET Core service to Azure Container Instances.

Lesson 4
Implementing serverless services
The complexities of managing modern infrastructure are the result of numerous revolutions in the IT
industry. To a large extent, the popularity of cloud services like Microsoft Azure is owed to the difficulties
of deploying and managing a fleet of physical machines and scaling these to meet modern workloads.
Similarly, the shift to containerized services, described in the previous lesson, is the result of a long-term
trend for minimizing the footprint of a packaged service and increasing the deployment density. In a way,
there is a transition from deploying and scaling machines to deploying and scaling individual components
like micro-services. The next logical step is to deploy and scale functions.

This lesson covers Azure Functions, Microsoft’s hosted serverless computing offering, which allows you to
deploy functions at cloud scale. You will use Visual Studio 2017 to develop and test functions locally, and
subsequently deploy them to Azure Functions and configure various triggers.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how serverless computing abstracts away server deployment and management.
• Describe the types of problems that benefit from serverless computing.

• Explain how to develop and test Azure Functions in Visual Studio.

• Describe how to deploy Azure Functions to Azure and monitor their execution.
• Describe how to use HTTP and other triggers to invoke Azure Functions.

Serverless services and Azure Functions


Serverless computing does away with managing
hosts and services, and shifts to representing your
application as a collection of functions, triggered
by various external events. Also known as
Function as a Service (FaaS), serverless
applications are isolated blocks of relatively short-
running business logic, which interact with
managed stateful services provided by the
serverless runtime, such as databases, queues, and
files. Azure’s approach to serverless computing is
Azure Functions (hosted in Azure App Service),
which can be developed and tested locally and
then shipped at Azure scale, and can be triggered by various interesting events.

Note: Clearly, “serverless” computing still requires servers to run functions. However,
because you do not manage these servers, and they are provisioned transparently to meet your
scale demands, it is as if they do not exist from your perspective; hence, “serverless.”

When creating a serverless application, you decompose your application logic into independent functions,
which can be invoked, scheduled, and monitored. With typical FaaS runtimes, Azure Functions included,

you only pay for actual execution time (often measured in CPU seconds), which means it is beneficial for
you to decompose your application’s logic into the smallest possible building blocks.

If function A is only invoked once per request, but function B is invoked 100 times, you don’t pay for the
compute and memory resources required for function A while function B is executing. This decomposition
is not only financially advantageous; it often helps to logically break your application into independent
parts, which are then easier to debug, update, and monitor in production.
You can write Azure Functions in a variety of programming languages, including C#, F#, and JavaScript. In
Azure Functions 2.0, there is experimental support for additional languages, including Java, Python, PHP,
PowerShell, and others. As a small unit of code, it is very likely that your function will require integrations
with external data sources and triggers, and the Azure Functions runtime provides a rich set of
integrations that includes Azure Cosmos DB, Azure Storage, Azure Service Bus, and more. For example, a
typical Azure Function might be invoked by a new message posted to a Service Bus queue, and it would
then process the posted data and write a new record into a Cosmos DB table. Another function might be
invoked by a new entry created in an Azure Storage table, and will respond by sending an HTTP request
to an external service.
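
The following is a minimal sketch of such a trigger-based function, assuming an Azure Functions 2.0 C# class library with the Service Bus extension package installed, a queue named bookings, and an application setting named ServiceBusConnection that holds the connection string:

A Service Bus queue-triggered function (sketch)

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBookingFunction
{
    // Invoked by the Azure Functions runtime whenever a new message arrives
    // on the 'bookings' Service Bus queue.
    [FunctionName("ProcessBooking")]
    public static void Run(
        [ServiceBusTrigger("bookings", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // The message body is passed in as a string; process it here.
        log.LogInformation($"Processing booking message: {message}");
    }
}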

Benefits and challenges of serverless technologies


Serverless computing has a number of important
advantages in the modern, ever-changing, agile
software development landscape:

• Rapid development and time to market. It is


much faster to develop a function and deploy
it to a serverless runtime than to develop a
full-blown application and package it as a
container image or a virtual machine image.
• Infinite scaling. Because functions are often
stateless and do not interact with global in-
memory resources, they can be designed to
infinitely scale. Furthermore, scaling requires
no effort on your behalf. It can be performed automatically by the serverless runtime, based on the
number of requests your application is handling.

• Pay only for what you use. With very accurate sub-second billing, you pay only for the periods of time
when your function is actually running and processing work. You don’t pay for servers or container
instances that wait idly for new work to arrive. One of the reasons sub-second billing is possible is that
functions are very fast to launch, so you can finish processing a new request in just a few seconds and
only get billed for these few seconds of processing time.

• Simplified software model. Serverless applications fit well with the micro-services model, where the
application is decomposed into the smallest independent building blocks. With serverless computing,
these building blocks are functions.
At the same time, serverless computing has some distinct disadvantages. It is not a silver bullet that fits
every application. In many cases, a hybrid architecture where parts of the system are delivered as
functions and other parts as more traditional software components is more appropriate. Some of the
problems with serverless computing include:

• Long-running applications. For long-running batch jobs, it may be more cost-effective to run the job
on a dedicated machine than to use a serverless runtime and pricing model.

• Vendor lock-in. Unless you carefully use a FaaS abstraction, such as OpenFaaS, your serverless architecture may be tied to a specific vendor, like Microsoft Azure or Google Cloud Platform. Migrating to another vendor might require significant changes in your application logic and deployment pipeline.

• Cold start. Because each function invocation is completely stand-alone, it might take extra time for
your function to handle a request because of JIT compilation or other startup costs, especially with
languages that were not designed for super-fast startup times, such as Java.

• Difficulty of local development. With some serverless runtimes, developing and testing the function
locally is difficult or even impossible. As a result, you need to develop test frameworks and simulator
tools for development purposes. (Fortunately, Azure Functions offers first-class local development and
debugging support in Visual Studio 2017, and even remote debugging for functions executing in
Azure.)
As with any new technology, you need to consider the benefits and challenges of serverless computing
and see if it is a good fit for your application. Serverless computing is often a great fit for modern
distributed architectures that consist of a large number of small, independent web services, which interact
with other cloud-hosted resources, such as databases and queues.

Implementing an HTTP-triggered Azure Function


You can create Azure Functions using the Azure
portal. Although it is not the recommended way
to maintain your serverless application, it is very
easy to get started, and you can write your
function’s code directly in the web browser when
using the Azure portal. An Azure Function App
(which may contain one or more functions, as
needed) can be scaled and billed by using one of the following two plans:
• Consumption plan. Resources are allocated to
the function as needed to meet demand. You
pay for each function execution, and you
don’t pay anything when the function is not running.

• App Service plan. Resources are allocated statically, as with Azure Web Apps and other App Service
resources. You configure the number of instances and the size of the instances running your
functions, and pay the same regardless of how many function invocations are actually executed.

The following screenshot shows configuration settings in the Azure portal for creating a new Azure
Function App. You can configure the resource group, hosting plan (consumption or App Service), location,
and other parameters before creating the Function App.

FIGURE 5.10: THE FUNCTION APP WINDOW IN THE AZURE PORTAL


When creating a new Azure Function, you need to pick the language (C#, F#, JavaScript, etc.) and the
trigger that the Azure Functions runtime will use to invoke your function. For now, use the HTTP trigger,
which associates your function with a URL that will invoke your function for GET, POST, or other HTTP
requests. The next topic explores other types of triggers that can invoke your function.

The following screenshot shows the configuration dialog box for creating a new function within an Azure Function App. You can select the trigger type that invokes the new function; use “HTTP trigger” for now.

FIGURE 5.11: TRIGGER TYPE SELECTION TO INVOKE THE NEW FUNCTION


The Azure Functions programming model for C# is based on simple C# files, with a static function that
represents your serverless computing entry point. Your function integrates with external data sources,
such as Cosmos DB, Storage queues, and HTTP requests, by using dedicated classes in various Azure NuGet
packages, such as Microsoft.Azure.WebJobs.Extensions.Http for HTTP triggers. At the same time, you
can use additional NuGet packages by declaring them in your function’s configuration. (Similarly, you can
use additional npm packages with functions written in JavaScript.)

The following code example shows a function that accepts an HTTP request, retrieves the source and destination query parameters, and returns a simple text HTTP response.

HTTP triggered Azure Function implementation


using System.Linq;
using System.Net;

public static HttpResponseMessage Run(
    HttpRequestMessage req, TraceWriter log)
{
    log.Info("Flight reservation API invoked");

    string source = req.GetQueryNameValuePairs().FirstOrDefault(
        q => q.Key == "source").Value;
    string destination = req.GetQueryNameValuePairs().FirstOrDefault(
        q => q.Key == "destination").Value;

    return req.CreateResponse(HttpStatusCode.OK,
        $"Booked from {source} to {destination}");
}

The preceding code example shows how a function can respond to external HTTP requests. It can now be
accessed from a browser by using its associated URL, which brings about the issue of authentication.
HTTP-triggered functions support the following forms of authentication to control who can access the function:

• No authentication. Anyone can invoke the function, given its URL.

• Function key. A per-function API key needs to be attached to each incoming request.

• Host key. A global admin API key needs to be attached to each incoming request.

• User authentication. You can configure login with various identity providers (such as Facebook or
Google), or Azure Active Directory.
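
As a brief illustration of how a client supplies a key when the function key or host key option is used, the following sketch sends the key in the x-functions-key request header (Azure Functions also accepts it in the code query string parameter). The URL and key values are placeholders, not values taken from this module.

Calling a key-protected HTTP-triggered function (illustrative sketch)

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class FunctionClient
{
    // Placeholder values; copy the real ones from the "Get function URL" pane in the Azure portal.
    private const string FunctionUrl = "https://myfunctionapp.azurewebsites.net/api/BookFlight";
    private const string FunctionKey = "<function-key>";

    public static async Task<string> BookAsync(string source, string destination)
    {
        using (var client = new HttpClient())
        {
            // The key can be sent as the x-functions-key header or as a "code" query parameter.
            client.DefaultRequestHeaders.Add("x-functions-key", FunctionKey);

            HttpResponseMessage response = await client.GetAsync(
                $"{FunctionUrl}?source={source}&destination={destination}");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}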

To learn more about the Azure Functions C# programming model, refer to the following link:
https://aka.ms/moc-20487D-m5-pg21

To learn about configuring user authentication with Azure App Service, refer to the following
link:
https://aka.ms/moc-20487D-m5-pg22

You can also use the Azure portal to monitor your function’s execution, read its log output, navigate its
local file system, and perform additional management and diagnostic tasks, which are outside the scope
of this module. A very useful feature is that you can test your function in the browser, without having to
actually force its trigger condition.
The following screenshot shows the Test pane in the Azure Functions portal, which helps test an HTTP-
triggered function without worrying about authentication and properly formatting the parameters.

FIGURE 5.12: THE TEST PANE IN THE AZURE FUNCTIONS PORTAL

For the settings in the Azure Functions management portal, refer to the following link:
https://aka.ms/moc-20487D-m5-pg23

Demonstration: HTTP-triggered Azure Function


In this demonstration, you will learn how to create an Azure Function App and a new function that is triggered by HTTP and returns a custom message.

Demonstration Steps
You will find the steps in the “Demonstration: HTTP-triggered Azure Function“ section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_DEMO.md.

Other triggers for Azure Functions


In addition to the HTTP trigger for Azure
Functions, which is similar to implementing a very
simple ASP.NET Core Web API controller, other
triggers are available, which help your function
respond to interesting events. The full list of
triggers is constantly expanding. The following is a
partial list:

• HTTP trigger (webhook). As discussed above, this trigger will invoke your function when it receives an HTTP request at the function’s URL.

• Timer. This trigger will invoke your function at specified time intervals (see the sketch after this list).
• Queue storage. This trigger will invoke your function when a new message arrives in a Storage queue.

• Service Bus queue. This trigger will invoke your function when a new message arrives in a Service Bus
queue.

• Cosmos DB. This trigger will invoke your function when insertions or updates occur in a partition that
you’re interested in monitoring.
• Event Hub. This trigger will invoke your function when a new event is inserted into the event hub on
which you’re listening.
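
As mentioned in the Timer bullet above, the following sketch shows what a timer-triggered function might look like when written with the attribute-based C# model covered later in this lesson; the function name and schedule are illustrative assumptions.

Timer-triggered Azure Function (illustrative sketch)

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class CleanupFunctions
{
    [FunctionName("NightlyCleanup")]
    public static void Run(
        // Six-field CRON expression: runs every day at 02:00.
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer,
        TraceWriter log)
    {
        log.Info($"Cleanup triggered at {DateTime.UtcNow:o}");
    }
}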

In addition to triggers, bindings help Azure Functions connect to data that is stored in remote services and
output data to remote services. For example, by using the Table Storage binding, your function can
automatically read an entity from a Table Storage, and by using the Sendgrid binding, your function can
automatically send an outgoing email message.

When you create your function in the Azure portal and configure its triggers and bindings, the
configuration is stored in a file named function.json. You can inspect this file to review and update the
triggers and bindings if necessary.

The following screenshot shows the configuration dialog box for creating a new function triggered by a
new message inserted into a Storage queue.

FIGURE 5.13: NEW FUNCTION CREATION DIALOG BOX


The following screenshot shows the triggers and bindings configuration screen in the Azure portal, which
modifies the function.json file mentioned above.

FIGURE 5.14: THE TRIGGERS AND BINDINGS CONFIGURATION SCREEN



The following code example shows the function.json file for a function configured with a Storage queue
trigger, a Storage queue message input binding, and a Table Storage output binding.

Azure Function binding configuration


{
"bindings": [
{
"name": "reservationId",
"type": "queueTrigger",
"direction": "in",
"queueName": "hotel-reservations",
"connection": "AzureWebJobsDashboard"
},
{
"type": "table",
"name": "paymentDetails",
"tableName": "paymentdetails",
"partitionKey": "Reservations",
"rowKey": "{reservationId}",
"take": 1,
"connection": "AzureWebJobsDashboard",
"direction": "in"
},
{
"type": "table",
"name": "$return",
"tableName": "bookedreservations",
"connection": "AzureWebJobsDashboard",
"direction": "out"
}
],
"disabled": false
}

In the previous example, the function is triggered by a new Storage queue message in a queue named
hotel-reservations. The function parameter corresponding to that message is called reservationId.
Additionally, the function has an input binding referencing an entity in a Storage table named
paymentdetails, which is assigned to a parameter named paymentDetails. The {reservationId}
reference indicates how the table entity should be retrieved based on the queue message contents.
Finally, the function returns an entity, which is then written into a Storage table named
bookedreservations.

When you configure bindings in the Azure portal by using the function.json file, you can access them as
function parameters (for input bindings) or return values (for output bindings) of your function. The
parameter and return value types depend on the type of the binding. For example, for the Table Storage
input and output bindings, you only need a class that has RowKey and PartitionKey properties
corresponding to the table keys.

The following code example shows a queue-triggered function implementation that corresponds to the function.json trigger and binding definitions shown in the previous example.

Queue triggered Azure Function implementation


public class Payment
{
    public string RowKey { get; set; }
    public string PartitionKey { get; set; }
    public string PaymentToken { get; set; }
}

public class Reservation
{
    public string RowKey { get; set; }
    public string PartitionKey { get; set; }
    public string HotelId { get; set; }
    public string Confirmation { get; set; }
}

public static Reservation Run(string reservationId,
    Payment paymentDetails,
    TraceWriter log)
{
    log.Info($"Processing reservation {reservationId}");
    return new Reservation
    {
        // Illustrative values; the original example elides the entity contents.
        PartitionKey = "Reservations",
        RowKey = reservationId,
        Confirmation = Guid.NewGuid().ToString()
    };
}

For a full list of triggers supported by Azure Functions and how they can be integrated in
your function using bindings, refer to: https://aka.ms/moc-20487D-m5-pg24

Developing Azure Functions in Visual Studio


Although the web-based editor in the Azure
portal is quite convenient and offers rapid
iteration times, you will probably want to develop
bigger functions by using a more familiar
development environment. Azure Functions tools
are integrated into Visual Studio 2017, so that you
can create a new Azure Functions project, develop
and test functions locally, and then publish them
to Azure App Service (as a Function App); all from
the convenience of Visual Studio 2017.

When you create a new Azure Function App project in Visual Studio 2017 and add a new
function to it, you can select the type of trigger as you would in the Azure portal. Based on your
selections, Visual Studio 2017 generates a function body that you can then implement. A single project
can contain multiple functions, which you can add later.

The following screenshot shows the Visual Studio new project wizard when creating an Azure Functions project.

FIGURE 5.15: VISUAL STUDIO NEW PROJECT WIZARD FOR CREATING AN AZURE FUNCTIONS PROJECT

For your convenience, when developing the function locally, instead of using the function.json file, you
can use C# attributes to specify triggers and bindings for your function. To name a few examples, you can
use the [QueueTrigger] attribute to specify that the runtime should invoke your function when a new
message is inserted to a Storage queue; you can use the [FunctionName] attribute to specify your
function’s name; and you can use the [Blob] attribute to specify that the value stored in an output
parameter of your function should be written to an Azure Storage Blob.

For using trigger and bindings attributes in an Azure Functions project, refer to the following
link.
https://aka.ms/moc-20487D-m5-pg25
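
The following sketch shows how the [QueueTrigger] and [Blob] attributes mentioned above might be combined in a single function. The queue name, blob path, and the archiving scenario are illustrative assumptions rather than part of the course sample code.

Queue-triggered function with a blob output binding (illustrative sketch)

using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class ArchiveFunctions
{
    [FunctionName("ArchiveReservation")]
    public static void Run(
        // Invoked when a new message arrives in the hypothetical "hotel-reservations" queue.
        [QueueTrigger("hotel-reservations")] string reservationId,
        // The {queueTrigger} expression reuses the queue message text as part of the blob name.
        [Blob("archive/{queueTrigger}.txt", FileAccess.Write)] out string archiveEntry,
        TraceWriter log)
    {
        log.Info($"Archiving reservation {reservationId}");
        archiveEntry = $"Reservation {reservationId} processed at {DateTime.UtcNow:o}";
    }
}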

The following code example shows a C# function that will be invoked based on an HTTP trigger. It is a trimmed version of the code generated by the Visual Studio wizard.

HTTP triggered function generated with Visual Studio


public static class BookingFunction
{
    [FunctionName("BookHotel")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")]
        HttpRequest req,
        TraceWriter log)
    {
        log.Info("Processing a booking request...");

        string hotel = req.Query["hotel"];
        int nights = int.Parse(req.Query["nights"]);

        string confirmation = Guid.NewGuid().ToString();

        return new OkObjectResult(
            $"Booked {hotel} for {nights} nights, confirmation {confirmation}");
    }
}

When you launch the Azure Functions project from Visual Studio 2017 (under the debugger or directly), it
is not deployed to the cloud, but rather instantiated locally in the Azure Functions console host. For HTTP-
triggered functions, you can then issue HTTP requests locally; for other types of triggers, there are
different approaches that can be used to simulate function inputs and outputs.
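
For example, because the function shown earlier is an ordinary static method, one way to simulate its input is to call it directly from a unit test. The following sketch assumes an xUnit test project that references the Function App project and the relevant ASP.NET Core and WebJobs packages; the TraceWriter stub and the asserted values are illustrative.

Invoking an HTTP-triggered function from a unit test (illustrative sketch)

using System;
using System.Diagnostics;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs.Host;
using Xunit;

public class BookingFunctionTests
{
    // Minimal TraceWriter stub that writes log entries to the console.
    private class ConsoleTraceWriter : TraceWriter
    {
        public ConsoleTraceWriter() : base(TraceLevel.Info) { }
        public override void Trace(TraceEvent traceEvent) => Console.WriteLine(traceEvent.Message);
    }

    [Fact]
    public void BookHotel_ReturnsConfirmationMessage()
    {
        // Build a fake HTTP request carrying the query string parameters the function expects.
        HttpRequest request = new DefaultHttpContext().Request;
        request.QueryString = new QueryString("?hotel=Contoso&nights=2");

        var result = BookingFunction.Run(request, new ConsoleTraceWriter()) as OkObjectResult;

        Assert.NotNull(result);
        Assert.Contains("Booked Contoso for 2 nights", (string)result.Value);
    }
}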

The following screenshot shows the Azure Functions console host processing HTTP requests locally.

FIGURE 5.16: AZURE FUNCTIONS CONSOLE HOST PROCESSING HTTP REQUESTS LOCALLY

For the Azure Functions local development workflow using Visual Studio 2017, refer to the
following link: https://aka.ms/moc-20487D-m5-pg26
When your Function App is ready for deployment, you can use the Visual Studio Publish wizard to create
a new Azure Function App or deploy into an existing one. You should not mix and match Visual Studio-
generated functions and manually authored functions in the Azure portal within the same Function App.

The following screenshot shows the publishing progress for a Visual Studio Azure Functions project into
an Azure App Service Function App:

FIGURE 5.17: THE VISUAL STUDIO PUBLISH WIZARD



Demonstration: Developing, Testing, and Publishing an Azure Function from CLI

In this demonstration, you will learn how to develop, test, and debug an Azure Function locally with the dotnet CLI and Visual Studio Code.

Demonstration Steps
You will find the steps in the “Demonstration: Developing, Testing, and Publishing an Azure Function from
CLI “ section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-
Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_DEMO.md.

Lab D: Implementing an Azure Function


Scenario
In this lab, you will implement a simple HTTP service in an Azure Function. The service is a proxy front-end
for the previously implemented hotel booking service, which simplifies bulk hotel booking operations for
a group of people.

Objectives
After completing this lab, you will be able to:

• Develop and test an Azure Functions application in a local development environment


• Deploy the Azure Function application from the development environment to Azure

Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD05_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD05_LAK.md.

Exercise 1: Developing the Service Locally


Scenario
In this exercise, you will develop and test an Azure Functions application.

Exercise 2: Deploying the Service to Azure Functions


Scenario
In this exercise, you will deploy the Azure Function application to Azure.

Module Review and Takeaways


In this module, you learned about the different ways to host ASP.NET Core services, such as self-hosting
and IIS hosting. You also learned about Azure Web Apps, Docker containers, and Azure Functions as options for cloud-based hosting.

Review Question
Question: What would you use to host a personal blog site in Azure, and why?

Module 6
Deploying and Managing Services
Contents:
Module Overview 6-1

Lesson 1: Web Deployment with Visual Studio 2017 6-2

Lesson 2: Web Deployment on Linux 6-8


Lab A: Deploying an ASP.NET Core Web Service on Linux 6-15

Lesson 3: Continuous Delivery with Visual Studio Team Services 6-16


Lesson 4: Deploying Applications to Staging and Production Environments 6-23
Lab B: Deploying to Staging and Production 6-27

Lesson 5: Defining Service Interfaces with API Management 6-28


Lab C: Publishing a Web API with Azure API Management 6-38
Module Review and Takeaways 6-39

Module Overview
You will learn how to deploy services to both on-premises and cloud environments. You will also learn
how to manage the interfaces and policies for your services.

Objectives
After completing this module, you will be able to:

• Explain Microsoft Internet Information Services (IIS) Web Deploy.


• Explain Azure Web Apps deployment by using a Microsoft Visual Studio Team Services build pipeline.

• Explain how to deploy web services to Azure Container Instances.

• Explain how to define service interfaces by using API Management and Swagger.
• Explain how to define policies by using API Management.

Lesson 1
Web Deployment with Visual Studio 2017
One of the quickest ways to deploy a web application to a remote server is to deploy it with the Web
Deployment Framework, or Web Deploy. With Web Deploy, you can perform several tasks at one time,
such as copying files to remote servers, configuring IIS application pools, and applying permissions to the
file system. There are many ways to use Web Deploy, but one of the easier ways is by using the publishing
feature of Visual Studio 2017.

In this lesson, you will learn about Web Deploy and how to deploy web applications by using Web Deploy
in Visual Studio 2017.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the capabilities of Web Deploy.


• Explain how to configure web deployment in Visual Studio 2017.

• Explain how to deploy a web application from Visual Studio 2017.

• Explain how to create a Web Deploy package and perform live deployment with Visual Studio 2017.

Introduction to Web Deploy


The deployment of web applications is complex
because it not only requires compiling the web
application and copying the compilation output
to the target server, but also requires other
actions, such as configuring IIS, modifying the
database connection strings, installing certificates,
updating the database schemas, and more.
In the early days of web development, the tools
supplied by the development platform, such as
XCOPY deployment and File Transfer Protocol
(FTP) deployment, were not enough to achieve
the complex task of deployment. In most cases,
the result produced either a lengthy document describing how to deploy the application manually or a
set of complex scripts that deployed the application automatically and had to be activated with a large set
of parameters.

This is where Web Deploy, which was released in 2009, is most useful. Web Deploy was created to simplify
the deployment of web applications to servers. Web Deploy can perform more than just copying files
between a source and a destination. It can perform additional tasks such as copying the configuration
from one IIS to another, writing to the registry, setting file system permissions, performing transformation
of configuration files, and deploying databases.
Web Deploy is installed with Visual Studio 2017. If you have a computer that does not have Visual Studio
2017 installed on it, and you want to use Web Deploy, you will have to install it manually.

Download Web Deploy


http://go.microsoft.com/fwlink/?LinkID=298820&clcid=0x409

You can use Web Deploy to publish and synchronize an existing web application on a remote
server. You can also use Web Deploy to create a deployment package from an existing web
application and publish that package to a server later. A deployment package, which is a
standard compressed file, contains both the content that you want to copy to a server and an
instruction file that contains the list of actions to perform on the target server. The instructions, or
providers, as they are referred to in the Web Deploy terminology, control the various resources
that can be created or manipulated in the server, such as files, IIS applications, databases, and
registry. You can also create your own custom Web Deploy provider if you have to perform a task
that is not implemented by any of the existing providers, such as attaching a .VHD file as a local
hard drive.

For a list of available Web Deploy providers, refer to the following link.
Web Deploy Providers
http://go.microsoft.com/fwlink/?LinkID=298821&clcid=0x409

You can use Web Deploy in various ways. For example, when you use Visual Studio 2017 to publish a web
application, you are actually using the Web Deployment Framework for the task. The same is true when
you export an application from IIS Manager, or when you use the MSDeploy command-line tool.
For more information about Web Deploy, refer:
Introduction to Web Deploy
http://go.microsoft.com/fwlink/?LinkID=298822&clcid=0x409

Configuring Web Deployment in Visual Studio 2017


Visual Studio 2017 provides several techniques for
publishing web applications:
• File system and FTP. The file system and FTP
options do not use Web Deploy and provide
a basic process for compiling the web
application and copying the files to the
destination.

• Web Deploy and Web Deploy package. The Web Deploy and Web Deploy package options use the Web Deployment Framework to perform complex deployments. For example, if you decide to create a Web
Deploy package, you can create a package that includes the web application files and the database
scripts, which run after the application is deployed.

Whichever deployment technique you choose, you can control some basic settings through the properties
of the web application project. If you do not plan to use Web Deploy, you can control only a few settings,
such as deploying files that are in the project folder but are not included in the project. If you plan to use
Web Deploy (either live or by creating a package), you can configure more settings, such as copying local
IIS application pool settings to the deployed server and listing the SQL script files, which will run as part of
the deployment.

To view these settings, right-click your web application project in the Solution Explorer window in Visual
Studio 2017, and then click Publish.

On the Pick a publish target page, click the IIS, FTP etc tab, and then click Create Profile.

FIGURE 6.1: A SCREENSHOT OF THE PICK A PUBLISH TARGET PAGE


On the Connection tab of the Publish wizard, complete the required fields, validate the connection, and
then click Next.

FIGURE 6.2: A SCREENSHOT OF THE CONNECTION TAB IN THE PUBLISH WIZARD.



On the Settings tab of the Publish wizard, ensure that the settings are appropriate, and then click Save.

FIGURE 6.3: A SCREENSHOT OF THE SETTINGS TAB IN THE PUBLISH WIZARD.


After you configure the publish settings, in the Solution Explorer window, right-click the project, and
then click Publish. This displays the Publish Web dialog box, in which you can perform the
following tasks:
• Select the publish technique.

• Provide information about the destination.

• Select which solution configuration you want to publish, such as debug or release.

• Begin the publishing process.

If you select any of the Web Deploy techniques, you can also provide additional settings, such as a new
connection string that will replace the current connection string in the web.config file.
Visual Studio 2017 stores all the publish settings in the project so that the next time you have to publish
the application, you can do a one-click publish instead of supplying all the information again.

Visual Studio 2017 supports storing more than one publishing profile so that you can create profiles for
different scenarios. For example, you can create different profiles for testing and production
environments, each with its own database connection string.
For more information on how to use the Web Deploy dialog box, refer to the following link.
How to: Deploy a Web Project by Using One-Click Publish in Visual Studio 2017.
http://go.microsoft.com/fwlink/?LinkID=298825&clcid=0x409

Note: When you create a Web Deploy package, in addition to the packaged compressed
file, a .cmd file is created, together with a readme.txt file that describes how to run the .cmd file
to deploy the package.

Demonstration: Deploying a Web Application with Visual Studio


This demonstration shows how to deploy a web application by using Visual Studio 2017.

Demonstration Steps
You will find the steps in the “Demonstration: Deploying a Web Application with Visual Studio“ section on
the following page. https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.

Creating a Deployment Package


In the previous topic, you learned how to use Web
Deploy from Visual Studio 2017 for performing a
live deployment and for creating a deployment
package. One of the disadvantages of using Web
Deploy from Visual Studio 2017 is that you must
have the web application project with its source
files. If you have an already compiled web
application that you want to deploy to another
server, you cannot use Visual Studio 2017 to
deploy it.
In Visual Studio 2017, you can use the Web
Deploy Package method to create a web
deployment package that you can install on any computer with IIS and Web Deploy.
The following screenshot shows the Connection tab of the Publish wizard.

FIGURE 6.4: A SCREENSHOT OF THE CONNECTION TAB IN THE PUBLISH WIZARD.



To install a package, copy the package to the host machine, and then run the following command at the
command prompt with administrator privileges.

Installing a Web Deploy package


WebApplication.deploy.cmd /Y

Lesson 2
Web Deployment on Linux
When Microsoft released the open source, cross-platform implementation of ASP.NET Core in late 2014,
one of the key strategic shifts was the newly-gained Linux support. For many organizations, the ability to
run Microsoft .NET applications (and especially ASP.NET web services) on Linux is a great consolidation
opportunity towards a single production operating system. The ability to use familiar development tools
on Windows and then host the resulting application on Linux as IT requirements dictate is important for
reducing costs.

This lesson covers the various options for publishing ASP.NET Core applications. It explores how ASP.NET Core applications are deployed to Linux hosts and how to use Docker containers to run a reverse proxy server in front of the ASP.NET Core host.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain how to publish ASP.NET Core applications that run on Linux.

• Describe reverse proxies and how they integrate with ASP.NET Core.
• Explain how to configure Nginx for ASP.NET Core.
• Explain how to deploy an ASP.NET Core web service with Nginx in Docker containers.

Publishing an ASP.NET Core app for Linux


During the development and debugging process,
you build your application without optimizations
enabled (in the Debug mode) and with the
support of a .NET Core runtime and the ASP.NET
Core libraries installed on your development
machine. Typically, you use Windows as your
development operating system because it
supports all the features of Visual Studio 2017 to
develop ASP.NET Core applications and IIS to host
the resulting applications. Your development
environment includes Windows, Visual Studio
2017, .NET Core, and other libraries and tools that
you use for development, diagnostics, and testing.

The open-source .NET Core is cross-platform and supports multiple operating systems as production
targets, including various versions of Windows (Windows Server, Windows Nano Server) and numerous
Linux distributions (Ubuntu, Debian, Red Hat Enterprise Linux, Alpine, and others). .NET Core also supports
numerous processor architectures, including Intel x86-64 (used by most servers today) and ARM (used by
IoT devices and mobile phones). Choosing the right operating system for all your organization’s
applications and services can be a major cost-saving factor, especially if you can consolidate your
production environments to a single operating system and processor architecture.

Some of the reasons for choosing Linux as your operating system include:

• Much lower on-disk and memory footprint in bare metal, virtualized, and containerized deployments
(in some cases by more than a factor of 10).
• First-class support for other platforms and programming languages, such as Java, Python, PHP, and
Go. Windows might not support some of these languages.

• Familiar environment for many system administrators.

When you choose Linux as your production platform, you can use the end-to-end build, publish, and
deploy workflow from your Visual Studio 2017 development environment.
When you prepare an ASP.NET Core application for production deployment, you build it with
optimizations enabled (in the Release mode), and you might choose to use a self-contained or
framework-dependent publishing mode. This choice affects the size of your final build and the size of
your container image and the image parts that can be shared with other images, if you’re deploying the
application in a container.

Framework-dependent publishing
When you use framework-dependent publishing, the final build output of your application contains the
application’s dynamic link libraries (DLLs) and all of its third-party dependencies, such as NuGet packages
and project references. However, .NET Core libraries and the .NET Core runtime, which includes the just-in-time (JIT) compiler, the garbage collector, and the dotnet tool, are not packaged with the application. The
shared installation of these components must be present on the target machine.

When you use containers for deployment, Microsoft provides official images on Docker Hub, which
contain the prerequisites for running a framework-dependent application. These are tagged with various
runtime tags. For example, microsoft/dotnet:2.1-runtime is a container image that contains the .NET
Core shared libraries and the components for running .NET Core 2.1 applications. There are also similar
versions of these images optimized for running ASP.NET Core services.

You can use the following commands to restore NuGet packages, compile your application, publish it as a
framework-dependent package, and then run it from the output directory.

Commands for publishing and running a framework-dependent ASP.NET Core application


dotnet restore
dotnet build -c Release
dotnet publish -c Release -o out
dotnet out/myapp.dll

Although you need to install .NET Core libraries and runtime components on the target machine,
framework-dependent publishing has a major advantage. The produced package is completely platform-
independent, and can be run without modification on any platform that supports .NET Core—regardless
of the operating system or processor architecture.

Self-contained publishing
When using self-contained publishing, the final build package of your application contains your
application’s DLLs, third-party dependencies, a complete copy of the .NET Core managed libraries, and
native components, such as the JIT compiler and the garbage collector. Because some of these
components are platform-dependent, when using self-contained publishing, you need to specify the
runtime identifier of a specific platform. The resulting package will run only on a specific operating system
and a processor architecture specified by the runtime identifier.

Below are some examples of runtime identifiers.

• linux-x64. This runtime identifier targets any Linux distribution for x86-64 processors, with the
exception of Alpine Linux. Examples of supported Linux distributions include Debian, Ubuntu Linux,
Red Hat Enterprise Linux, and Fedora.

• alpine.3.6-x64. This runtime identifier targets the Alpine Linux distribution for x86-64 processors.
Alpine Linux is a lightweight Linux distribution, which works well in container environments because
of its small size. For example, a “Hello, World” .NET Core application container image on top of Alpine
Linux can be as small as 54 MB in size.

• win-x64. This runtime identifier targets any version of Windows for x86-64 processors, including Windows Server 2008 R2, Windows Server 2016, and others.

• win10-arm64. This runtime identifier targets Windows 10 or Windows Server 2016 versions running on ARM 64-bit processors.

For a full list of runtime identifiers, refer to the .NET Core RID catalog.
https://docs.microsoft.com/en-us/dotnet/core/rid-catalog

You can run the following commands to restore NuGet packages, compile your application, publish it as a
self-contained application for Linux x86-64, and then run it from the output directory.

Commands for publishing and running a self-contained ASP.NET Core application


dotnet restore
dotnet build -c Release
dotnet publish -c Release -r linux-x64 -o out
out/myapp

Note: Unlike framework-dependent publishing, running a self-contained .NET Core


application does not require the dotnet command-line tool. In fact, in the preceding example,
the out/myapp executable produced by the self-contained publishing process is the dotnet
command-line tool, which launches the application’s main library (in this case, out/myapp.dll).

When you use containers for deployment, Microsoft provides official images on Docker Hub, which
contain the basic native dependencies required by a self-contained .NET Core application. These images
are tagged with various runtime-deps tags, that are much smaller in size than the corresponding
runtime tags. For example, microsoft/dotnet:2.1-runtime-deps-alpine is a container image that
contains only the Alpine Linux base image and the native dependencies, such as libzlib and libcurl,
required by .NET Core on Alpine Linux. There are also similar versions of these images optimized for
running ASP.NET Core services as opposed to, for example, .NET Core console applications.

For reference, these are the sample sizes of a “Hello, World” .NET Core 2.1 application packaged into a
container image.

Target platform   Tags                                       Container image size

Alpine Linux      runtime (includes .NET Core)               87 MB

Alpine Linux      runtime-deps (only native dependencies)    54 MB

Debian            runtime (includes .NET Core)               180 MB

Debian            runtime-deps (only native dependencies)    140 MB

When using self-contained publishing, you should also consider using the intermediate language (IL)
linker NuGet package, which removes modules and types that your application does not require. In some
cases, using the IL linker can reduce package sizes by half or more. IL linker works quite well for many
applications. However, if an application uses reflection extensively, some dependent assemblies might be
removed by the IL linker. You can control this behavior by providing the IL linker with a special
configuration file.
For instructions on using the IL linker, refer to the following link
https://aka.ms/moc-20487D-m6-pg1

How to choose the publishing type?


To decide whether you should use framework-dependent or self-contained publishing, you might
consider the following:
• Final image size. With self-contained publishing, the final size of the packaged application is often
bigger because it includes the .NET Core runtime and framework assemblies, in addition to your
application’s assemblies. However, if you consider the entire footprint of both the application and the
.NET Core runtime (which needs to be installed on the machine if you are not using self-contained
publishing), and consider that the IL linker can remove unnecessary dependencies, using self-
contained publishing can produce a smaller final disk footprint on the target machine.

• Runtime sharing. If you deploy multiple .NET Core applications to the same machine or run multiple
containers based on the .NET Core container images, then by using framework-dependent publishing,
you can get all the application instances to share the same .NET Core runtime files and assemblies on
the disk. Furthermore, when these runtime files and assemblies are loaded into memory, they are
shared by using the operating system’s library loader, to avoid duplication across processes. On the
other hand, when using self-contained publishing, each application gets its own copy of the .NET
Core runtime files and assemblies on the disk. When you run multiple applications, these files and
assemblies are not shared in memory because they are supported by different files. This produces a
bigger disk and memory footprint.
• Platform flexibility. When using self-contained publishing, you must choose a target platform on
which your application will run. You can’t build the application for Linux operating systems on a 64-
bit Intel processor and then run it on a Windows operating system or an operating system with an
ARM-based processor. On the other hand, when using framework-dependent publishing, the
resulting build can be run by using the dotnet helper executable on any platform where the
appropriate version of .NET Core is installed.
• Minimal dependencies. When using self-contained publishing, you minimize the runtime
dependencies required for hosting your application. In fact, only a handful of native dependencies
need to be installed on the target system, such as libcurl. If your applications run in constrained
environments, or if you distribute your application to be run by others, minimizing dependencies can
be an important advantage.

• Control over the .NET Core version and servicing. When using self-contained publishing, you control
the exact version of .NET Core that will be used to run your application. There’s no risk of servicing
upgrades to the host machine breaking your deployment. On the other hand, you will not benefit
from any security or bug fixes that are deployed to the host machine’s .NET Core installation, which
you would benefit from when using framework-dependent publishing.

Reverse Proxy Servers


You commonly expect web applications and
services to handle numerous concerns such as
caching, Secure Socket Layer (SSL) termination,
and web attacks. Many web servers today use
reverse proxies to provide common features and
functionality on top of web application
frameworks, which do not necessarily have a
robust or up-to-date implementation of these
features. A reverse proxy is an intermediary
positioned between the client and the server,
which performs HTTP requests to the server on
behalf of the client and returns HTTP responses to
the client. From the client’s perspective, the reverse proxy terminates the TCP connection, so that the
actual server is never visible to the client.

Some of the common uses of reverse proxies include:

• Content caching. A reverse proxy can cache commonly retrieved resources (especially static content)
and return them to clients without making a request to the server.

• Load balancing. A reverse proxy can distribute incoming requests to a pool of several back-end
servers by either using simple load balancing rules, such as round robin, or by inspecting the HTTP
requests, URLs, and headers to determine which server should service the request.

• Web application firewall. A reverse proxy can detect and mitigate common attacks on web
applications.
• SSL termination. A reverse proxy can terminate the HTTPS requests from clients. The computing
resources required for SSL encryption are then offloaded from the web server to the reverse proxy
server.

Microsoft recommends hosting ASP.NET applications and services by using the Kestrel web server behind
a reverse proxy that forwards requests to the Kestrel web server. In addition, you must use a reverse proxy
with the Kestrel web server if you want to run multiple ASP.NET Core applications that share the same IP
and port on a single server. In this scenario, the Kestrel web server doesn’t support sharing the same IP
and port across multiple processes, which means that clients will have to use multiple ports. A reverse
proxy can look at the incoming request and route it to the appropriate Kestrel web server process, which
listens to the request on its unique IP and port combination.
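
As a sketch of how each Kestrel-hosted service can be bound to its own local port behind such a proxy, the following ASP.NET Core 2.x Program class uses the UseUrls method; the port number is an arbitrary example, and the standard Startup class from the project template is assumed.

Binding Kestrel to a specific local port (illustrative sketch)

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            // Kestrel listens only on a local port; the reverse proxy owns ports 80 and 443.
            .UseUrls("http://localhost:5000")
            .Build()
            .Run();
    }
}
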
Common reverse proxy software includes Nginx, Apache HTTP Server, Squid, YXORP, and IIS. Most of these are open source and available under a permissive license for use in your own
environment. This lesson uses Nginx, a popular open-source web server, which can operate as a reverse
proxy to an ASP.NET Core application.

For more considerations related to hosting and deploying ASP.NET Core applications,
including reverse proxies, refer to the following link.
https://aka.ms/moc-20487D-m6-pg2

Configuring Nginx for ASP.NET Core


To configure Nginx as a reverse proxy for an
ASP.NET Core application with the Kestrel web
server, you need to provide a configuration file
(nginx.conf). This configuration file notifies Nginx
where to find the actual web server, which
headers to forward, and how to handle security
concerns.

When you have an ASP.NET Core web service listening on the default port 5000 for a new ASP.NET Core project, the following Nginx configuration file configures the Nginx reverse proxy to expose this web service on port 80.

Nginx reverse proxy configuration


# The events section is required at the top level of nginx.conf, even if it is empty.
events { }

http {
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

In the preceding example, the listen directive specifies that the Nginx process should accept HTTP
requests over port 80; the proxy_pass directive specifies the address on which the ASP.NET Core
application process is listening; and the proxy_set_header directives include headers that can be used by
the web server; for example to determine the client’s real IP address.

For an example of using two Docker containers to run an ASP.NET Core application and an
Nginx reverse proxy, refer to the following link.
https://aka.ms/moc-20487D-m6-pg3
Note: When you use the ASP.NET Core authentication middleware, you need to use the
UseForwardedHeaders method to forward the X-Forwarded-For and X-Forwarded-Proto
headers. The ForwardedHeaders middleware needs to run before the authentication
middleware. For example, this middleware updates the Request.Scheme property with the value
from the X-Forwarded-Proto header, which might be https://, although the actual request from
the reverse proxy to the web server was performed by using a plain HTTP connection.

For more information about the Forwarded Headers middleware, refer to the following link.
https://aka.ms/moc-20487D-m6-pg4
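
A minimal sketch of this middleware registration in the Startup.Configure method is shown below. It assumes the ASP.NET Core 2.x pipeline and is not tied to any specific lab in this course.

Forwarded Headers middleware registration (illustrative sketch)

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

// In Startup.cs
public void Configure(IApplicationBuilder app)
{
    // Apply the X-Forwarded-* headers set by the reverse proxy before any middleware
    // (such as authentication) that relies on the request scheme or client IP address.
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });

    app.UseAuthentication();
    app.UseMvc();
}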

Demonstration: Deploying an ASP.NET Core Web Service with Nginx


In this demonstration, you will learn how to deploy an ASP.NET Core application behind an Nginx web server.

Demonstration Steps
You will find the steps in the “Demonstration: Deploying an ASP.NET Core Web Service with Nginx“
section on the following page. https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.

Lab A: Deploying an ASP.NET Core Web Service on Linux


Scenario
In this lab, you will deploy an ASP.NET Core web service on Linux.

Objectives
After completing this lab, you will be able to:

• Deploy an ASP.NET Core Web API service to a Linux Nginx web server.

• Configure the Nginx web server as a reverse proxy.

• Create a new slot in Azure Web Apps.

• Publish a new version to the staging slot.

• Swap between the production and staging slots.

• Create an API Management instance in the Azure portal.

• Configure API Management for your service.

• Test the API with cache rules.

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_LAK.md.

Exercise 1: Publishing the ASP.NET Core Web Service for Linux


Scenario
Build an ASP.NET Core Docker container.

Exercise 2: Configure Nginx as a Reverse Proxy


Scenario
Create another container for the Nginx web server and use it as a reverse proxy.

Lesson 3
Continuous Delivery with Visual Studio Team Services
In the previous lessons, you learned how to use web deployment techniques to deploy your application
both on-premises and to Azure. However, there are some questions you might want to answer before you
start using the deployment techniques:

• When are you going to deploy your applications?

• Will you deploy to your source control after each check-in or on demand?

• Will you deploy only after the code passes unit tests?

• Will you deploy every couple of days, or deploy nightly to have an up-to-date testing environment
the following day?
• Will you manually build, test, and deploy the application every time or use automated, scheduled
tasks?

Continuous delivery is a software development approach that answers some of these questions, if not all.
If used correctly, it can help you increase the quality of your application.

In this lesson, you will learn the benefits of using continuous delivery and how to use continuous delivery
with Azure and with source control management systems, such as Git and Team Foundation Server (TFS).

Lesson Objectives
After completing this lesson, you will be able to:
• Describe the benefits of continuous delivery.

• Explain the principles of continuous delivery.

• Describe how to configure a continuous integration (CI) build with Visual Studio Team Services.
• Explain how to configure a continuous delivery pipeline with Visual Studio Team Services.

• Explain how to use continuous delivery with TFS and Git.

Benefits of Continuous Delivery


Continuous delivery is a software development
approach that entails releasing every stable
version of your product to the production
environment. That is, as soon as you are confident
that your product is of sufficient quality, you can
release it to the real-world users. You must
determine how frequently this happens: once a
month, twice a week, or even multiple times
during a single day.

By delivering continuously, you gain the following benefits:

• Reduce the time that is required for users and customers to see improvements in your application.

• Increase the confidence of your development teams through the need to maintain a high-quality
product constantly.

• Reduce the overall risk of developing a complex software product by using automated tools.

Continuous Delivery Principles


When you apply the continuous delivery approach
to your product, you set up a pipeline that applies
to all code changes that you make to the product.
This pipeline usually includes the following:

• Building the product

• Running unit and integration tests

• Deploying the product to a staging environment and running functional tests

Because it would be impractical to have a human
perform all the previous steps for every code
change, continuous delivery implies the use of
automation, which involves:

• Triggering automated builds on every code change to a source-control repository.


• Requiring that a successful compilation of the product be followed by a 100% pass rate of all unit and
integration tests.

• Setting up of virtual machines for use as staging environments.

• Running installation or deployment packages for setting up the product.

Configuring a CI build with Visual Studio Team Services


To configure a CI build, go to your Visual Studio
Team Services project website, navigate to the
Build and Release tab, and then click New.

The following screenshot shows the Build and Release tab on the Visual Studio Team Services project website.

FIGURE 6.5: A SCREENSHOT OF THE BUILD AND RELEASE TAB.



Choose your source location, and then click Next.

The following screenshot shows the Select a source page on the Visual Studio Team Services project
website.

FIGURE 6.6: A SCREENSHOT OF THE SELECT A SOURCE PAGE.


Select the ASP.NET Core template, and then click Apply.
The following screenshot shows the Select a template pane.

FIGURE 6.7: THE SELECT A TEMPLATE PANE


The build definition has multiple steps:

1. Restore. Download NuGet packages.

2. Build. Build the solution.


3. Test. Run the unit test.

4. Publish. Create the deployment package.

5. Publish Artifacts. Save the build output in Visual Studio Team Services.

You can add more tasks to the build steps.

The following screenshot shows the tasks under a CI build.

FIGURE 6.8: TASKS UNDER A CI BUILD


To successfully run the publish steps, select Publish, clear the Publish Web Projects check box, and
provide the details in the Path to project(s) box.
The following screenshot shows the various input parameters required in the Publish task.

FIGURE 6.9: INPUT PARAMETERS FOR THE PUBLISH TASK


To trigger the build every time someone commits a change to the source code, click the Triggers tab,
and then select the Enable continuous integration check box.
The following screenshot shows the events on the Triggers tab required to trigger a build.

FIGURE 6.10: A SCREENSHOT OF THE EVENTS ON THE TRIGGERS TAB.


After you have finished configuring the build, click Save & queue. This will save the settings and start a
new build.

Configuring a Continuous Delivery Pipeline with Visual Studio Team Services

To configure a continuous delivery pipeline, go to
your Visual Studio Team Services project website,
select the Build and Release tab, select the
Releases tab, and then click New definition.

The following screenshot shows the Release tab in the Build and Release hub of a Visual Studio Team Services project.

FIGURE 6.11: THE RELEASE TAB IN THE BUILD AND RELEASE HUB.
Select the Azure App Service Deployment template and then click Apply.

The following screenshot shows a list of deployment templates.

FIGURE 6.12: A SCREENSHOT SHOWING A LIST OF DEPLOYMENT TEMPLATES FROM WHICH TO CHOOSE.
Select the artifact from the Artifacts section on the left, click the Source (Build definition) drop-down
arrow to select the build that was created in the previous topic, and then click Add.

The following screenshot shows the New Release Definition screen.

FIGURE 6.13: A SCREENSHOT OF THE NEW RELEASE DEFINITION SCREEN.



Move to the Tasks tab, select your Azure subscriptions by using the Azure subscription drop-down
arrow, and then select the App service name by using the App service name drop-down arrow to deploy
your application.

The following screenshot shows the input parameters on the Tasks tab required to deploy an application.

FIGURE 6.14: A SCREENSHOT OF THE INPUT PARAMETERS ON THE TASKS TAB.


Now every time someone commits a change, the build will run, which will deploy a new version.

Demonstration: Continuous Delivery to Websites with Git and Visual Studio Team Services

This demonstration shows how to set up a Visual Studio Team Services project with continuous
integration to an Azure Web App.

Demonstration Steps
You will find the steps in the “Demonstration: Continuous Delivery to Websites with Git and Visual Studio
Team Services“ section on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.

Lesson 4
Deploying Applications to Staging and Production
Environments
By now, you have learned how to use Web Deploy and continuous delivery to automate the deployment
process of your application, but there is more to deployment than just making sure the target server has
the same version of the new application. For example, when you deploy more than one web application
to a web server, there are steps you can take to improve the way these applications run side-by-side. In
addition, when you deploy a new application to an existing environment, especially to production
environments, you have to consider how the deployment process itself will affect users that are currently
trying to use your application. Will the application still be able to respond to requests while being
updated? Will its throughput be affected when servers are down for deployment?

In this lesson, you will learn about additional tools and techniques that can assist you in deploying
applications to staging and production environments.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain the benefits of deploying your application to the staging environment.

• Explain how to deploy your applications to the staging and production environments in Azure.
• Describe the deployment strategies for each application.
• Describe how to use deployment slots with Azure Web Apps.

• Describe the advantages of configuring your applications in the cloud by using application settings.

Production and Staging Environment


Just as you should never move web applications
to a production environment without first testing
them in a testing environment, you should not
deploy web applications to a production
environment without first testing them in a
production-like environment.

To test your application in a production-like


environment, also called the staging environment,
you first need to create that environment. The
cost of creating and maintaining such
environments can be high. Azure offers great
support in this area.

In the next topic, you will be introduced to the Azure App Services slots that enable you to create and
delete production-like environments quickly.

Staging environment benefits

• Validate your application in the production-like environment. This is an important step to check if
everything is configured correctly and it works in the Azure environment.

• Warm up the application before it goes to production. Services usually use caches and connections to
databases, so warming up the service makes it more responsive, and users are not affected by the
new version.

• No request dropping. Requests are initiated and completed in the production environment before
the version upgrade takes place.

• No downtime when switching versions. Swapping the staging slot that holds the new version with the
production slot is fast, so users are not affected by the new version.

• Ability to switch back quickly to the previous working version. Even after checking the application in
the testing and staging environments, there can still be production-related problems for scenarios that
cannot be reproduced in these environments. Therefore, retaining the previous production version
enables you to switch back quickly to the previous working version.

Staging Environment in the Cloud


From the data center perspective, the same
hardware and software specification is used for
both the production and the staging
environments. However, you can have different
hardware and software configurations for each
environment, depending on the service
configuration that you use. For example, you can
deploy a service to a production environment by
using four extra-large instances but deploy the
same service to a staging environment that only
has two medium-sized instances. The only
difference between the two environments is the
service URL and the virtual IP (VIP) that are used to access the external endpoints of each environment.
Staging environments use a ’-stage’ suffix for the service URL and have a different VIP than that of the
production environment.

You can also use the staging environment to perform a swap VIP addresses update. With a swap VIP
addresses update, the virtual IP and Domain Name System (DNS) addresses of your staging and production
environments are exchanged. As a result, your production environment has the address and VIP of the
staging environment, and vice versa.

By creating a staging environment that has the same hardware and software configuration as your
production environment, you can use the swap VIP addresses update to upgrade your production
environment quickly without experiencing the downtime of upgrade domains.

Note: If you have a single instance in your production environment in Azure, performing
an in-place upgrade disables the instance during the upgrade. Using multiple instances, which is
the recommendation for production environment to achieve 99.95% availability, provides the
required availability of your service, but reduces the throughput of the service because of the
downtime of instances in the domain being upgraded.

To perform the swap VIP addresses update, follow these steps:

1. Deploy the upgraded web application to the staging environment. Use the same virtual machine size
and number of instances as you use for your production environment.

2. Verify that your application works correctly in the staging environment. You might have to change
the service URL you are using in the client application to point to the staging environment instead of
the production environment.

Note: A swap VIP addresses update requires having both production and staging
environments deployed. If you only have the staging environment deployed, you will not be able
to use the swap VIP addresses update option.

3. In the Microsoft Azure portal, click App Services, select the service deployment, and then on the
Overview blade, click Swap. In the Swap dialog box, choose the Source and Target slots, and then
click OK.

Note: After you complete the swap VIP addresses update, and no longer require the
staging environment instances, delete the staging deployment to conserve CPU hours.

For more information about staging and production environments, refer to the following link

Manage Cloud Services in the Azure portal


https://aka.ms/moc-20487D-m6-pg13

Set up staging environments in Azure App Service


https://aka.ms/moc-20487D-m6-pg14

For more information about deploying to Azure, refer to the following link.
https://aka.ms/moc-20487D-m6-pg7

Deployment Strategies
Deployment needs to be planned for every version
of each application. This topic discusses the
various deployment considerations.

Downtime.
When deploying applications to the production
environment, zero downtime is crucial for user
experience. Therefore, planning a deployment for
a new version involving database schema changes,
and any change that can affect the current version
in production, is necessary. Simulating the
deployment process in a separate environment
can help find issues that weren’t considered
earlier.

Multi-phase swap.
Even after simulating the deployment in the testing and staging environments, errors can occur in the
production environment. For that same reason, you want to be able to roll back quickly to the last

working version. Multi-phase swap ensures that as long as you don’t validate the new version, the last
version remains unchanged for the rollback option. After the approval of the new version, the previous
version is free to be used in the next staging version.

Auto-swap.
Deploying a large change to an application is a complicated task, and it is hard to test. Deploying many
small changes is much easier for development and testing, but it requires a fast, automatic, smooth, and
safe deployment process. Azure App Service offers an auto-swap feature that automatically swaps the
application into the production slot after a new version is deployed to the staging slot.

Demonstration: Using Deployment Slots with Web Apps


In this demonstration, you will create a staging slot in Azure Web Apps, publish an updated service to the
staging slot, and swap between the production and staging slots to release a new version of the service.

Demonstration Steps
You will find the steps in the “Demonstration: Using Deployment Slots with Web Apps“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_DEMO.md.

Configuration in the Cloud


One of the common tasks, when you deploy web
applications to a different environment, is
changing the configuration because some
configurations differ between environments. For
example, different environments usually use
different database connection strings; production
environments usually change the compilation
mode from Debug to Release; and in the
development environment, you might want to see
the original errors, whereas, in other
environments, you would probably choose to hide
them and show a custom error page.
One of the options for configuration in Azure App Service is to use the application settings
option. This option is preferred because it separates configuration, such as connection strings, from the
code and enables switching configurations between environments quickly.

For more information about Azure App Services configuration, refer to the following link.
https://aka.ms/moc-20487D-m6-pg8

When swapping between slots in Azure App Service, Azure automatically swaps the settings between
the slots. There is also an option to make a specific setting stick with the slot at the time of swapping
(a slot setting).
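In an ASP.NET Core service, application settings defined in Azure App Service are surfaced to the application
as environment variables and are picked up by the default configuration providers, so they can be read
through IConfiguration. The following is a minimal sketch (not part of the course code) that reads a
hypothetical StorageAccountName setting.

Reading an application setting with IConfiguration

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class SettingsController : Controller
{
    private readonly IConfiguration _configuration;

    // IConfiguration merges appsettings.json, environment variables, and
    // (in Azure) the App Service application settings
    public SettingsController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public IActionResult Index()
    {
        // "StorageAccountName" is a hypothetical application setting key
        string accountName = _configuration["StorageAccountName"];
        return Content($"Using storage account: {accountName}");
    }
}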

Lab B: Deploying to Staging and Production


Scenario
In this lab, you will deploy an ASP.NET Core application to both staging and production slots, and perform
a swap between the environments.

Objectives
After completing this lab, you will be able to:

• Create a new slot in Azure Web Apps.

• Publish new version to the staging slot.


• Swap between production and staging slot.

Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_LAK.md.

Exercise 1: Deploying the Application to Production


Scenario
Create a new Azure Web App and deploy an ASP.NET Core application to it.

Exercise 2: Create a Staging Slot


Scenario
Create a new slot for the web app in the Azure portal and deploy a new version of the application to this
slot.

Exercise 3: Swapping the Environments


Scenario
Switch between the production and the staging slot to release the new version to production.

Lesson 5
Defining Service Interfaces with API Management
We first discussed the OpenAPI specifications in module 3, “Creating and Consuming ASP.NET Core Web
APIs”, lesson 5, “Automatically Generating HTTP Requests and Responses”. The idea of defining an
interface or a contract for your HTTP services in a vendor-agnostic language with OpenAPI is extremely
valuable for creating interconnected applications, such as in a microservices architecture. One of the advantages
of a well-defined API specification is the ability to provide additional layers on top of APIs, providing
documentation endpoints, error handling, security policies, throttling (quotas), and other services.

API Management is a hosted platform that provides numerous services on top of APIs that you host
yourself, or APIs hosted on other Azure services (including Azure App Service). In this lesson, you will learn
how to use the API Management and OpenAPI to provide robust, secure, and reliable APIs to your
customers.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe API Management and OpenAPI.


• Explain how to import OpenAPI specifications into API Management instances.

• Describe how to publish and test APIs.

• Describe how to configure API policies for throttling, security, and other configurations.
• Limit call rates by using API Management.

API Management and OpenAPI


Open and well-defined API specifications are
changing the world by making it possible for
services from independent vendors to reliably talk
to each other. The whole is bigger than the sum
of its parts: you can create useful applications by
leveraging service APIs provided by other parties,
and you can make a successful business providing
service APIs to other parties. Managing a
successful API involves a lot of recurring tasks:
security, quotas, API keys, versioning, automatic
documentation, and others.

API Management provides a hosted, cloud-scale


platform for publishing your service APIs to other developers. It is focused on providing layers of
additional services on top of your API; API Management is not a platform for building APIs (like ASP.NET
Core) or hosting APIs (like Azure App Service).

The services provided by API Management include the following:


• Routing. API Management routes HTTP requests made by external clients to your backends (on Azure
or anywhere else).

• Authentication. API Management can verify API keys for using your APIs, and also other credentials
such as client certificates.

• Caching. API Management can cache API responses if so configured (e.g. for rarely-changing GET
requests) and return them to clients without consulting your back-end services.

• Analytics. API Management logs the API calls performed to your service so that you can analyze its
performance and behavior.

• Quotas. API Management can enforce usage quotas and rate limits that you specify, and return the
appropriate errors to clients without putting additional load on your back-end services.

• Transformations. API Management can transform requests and responses on the fly, which is very
useful when multiple versions of your API must continue to be accessible to clients.
Although you can use the API Management portal to create API operations manually, API Management
supports (and uses internally) the OpenAPI specification. As a result, if you already have a well-defined
OpenAPI specification that describes your service (as you should), it will be very easy to get started with
API Management by importing that specification and then defining your API Management configuration
on top of it.

The API Management architecture consists of the following components:

• API Management instance. The API Management instance is a hosted, scalable endpoint that receives
HTTP requests from clients and forwards them as necessary to your API back ends.

• API Management publisher portal. The API Management publisher portal is a hosted web portal that
you use to manage the API Management instance—create new API methods, configure
authentication, throttling, and other features. In recent releases, a lot of functionality from the stand-
alone API Management publisher portal was introduced directly from the main Azure portal, under
the API Management service blades.
• API Management developer portal. The API Management developer portal is a hosted web portal
that your clients (developers) use to read the documentation on your API, try it out, and get the API
keys for accessing your API.

API Management Products, APIs, and Policies


To get started with API Management, you create
an API Management instance (also known as
service). The instance contains APIs, which are
grouped into products. Each API is an individual
back-end service, which clients can invoke by
making HTTP requests. A product groups together
multiple related APIs along with a usage quota.
Finally, APIs are comprised of operations, which
are combinations of URL suffixes and HTTP
methods—in other words, endpoints exposed by
the back-end service.

As an example, you might have the following hierarchy of APIs:

• API Management instance: Blue Yonder, blueyonder.azure-api.net
  o API product: Flight Reservations (Mobile App Developers)
    - API: Flights, /flights
        Operation: GET, /
        Operation: GET, /{flight-id}
    - API: Available Seats, /seats
        Operation: GET, /{flight-id}
        Operation: PUT, /reserve
    - API: Reservations, /reservations
        Operation: PUT, /
        Operation: DELETE, /{reservation-id}
  o API product: Flight Reservations (Travel Agencies)
    - API: Flights, /flights
        Operation: GET, /
        Operation: GET, /{flight-id}
    - API: Available Seats, /seats
        Operation: GET, /{flight-id}
        Operation: PUT, /reserve
    - API: Reservations, /reservations
        Operation: PUT, /
        Operation: DELETE, /{reservation-id}
    - API: Group Reservations, /reservations/groups
        Operation: PUT, /
        Operation: DELETE, /{reservation-id}
    - API: Discounts, /discounts
        Operation: GET, /{organization-id}
        Operation: PUT, /
        Operation: POST, /{organization-id}
In the preceding example, there are two API products: one for mobile app developers, and one for travel
agencies. The second product has more APIs (e.g. for making group flight reservations), and can be
associated with a higher usage quota.

Note: The preceding API is not a good example of REST API design best practices. It is
provided only as an illustration that highlights the hierarchy of API Management concepts.

The following screenshot illustrates the first step in creating an API Management instance. You need to
specify the service name, location, organization name, the pricing tier, and other details.

FIGURE 6.15: A SCREENSHOT OF THE API MANAGEMENT SERVICE WINDOW.


After creating the API Management instance, you can either import an existing OpenAPI specification
(including from an online resource), import a SOAP API specification, import an API specification from an
Azure app (e.g. Azure Function App), or create your own APIs one-by-one.

For more information on creating an OpenAPI definition for Azure Function Apps, check the
following link.
https://aka.ms/moc-20487D-m6-pg9

When creating a new API, you specify the back-end service address for that API, and then add at least one
operation. The operation has an HTTP method (e.g. GET, POST), and can have a constant or a variable
(parameterized) URL. For example, the URL /flights/{flight} is a parameterized URL where {flight} will be
replaced with a flight identifier, such as “BY005”.

The following screenshot illustrates the process of creating a new API.

FIGURE 6.16: A SCREENSHOT OF THE CREATE A BLANK API WINDOW.


The following screenshot illustrates the process of adding a new operation to an existing API.

FIGURE 6.17: A SCREENSHOT OF THE ADD OPERATION TAB.


You can test your API from within the API Management publisher portal. The portal will invoke the HTTP
request for you, and display the resulting response. When testing, you do not need to provide API keys or
subscribe to products, because you are the administrator of the API, and your identity is subscribed to all
the API products in the API Management instance.

The following screenshot illustrates the process of testing an API operation by invoking it from the API
Management portal.

FIGURE 6.18: A SCREENSHOT OF THE TEST TAB IN THE API MANAGEMENT PORTAL.
The following screenshot illustrates the HTTP request and response as shown in the API Management
portal.

FIGURE 6.19: A SCREENSHOT SHOWING AN HTTP REQUEST AND THE RESULTING


RESPONSE.
Finally, after your APIs are in order, you create a product that groups a collection of APIs into a cohesive
unit to which other parties can subscribe. When a developer gets an API key from the API Management
developer portal, this API key is associated with a specific product, and only the APIs that belong to that
product can be used with that API key.

The following screenshot illustrates the process of creating a new API product.

FIGURE 6.20: A SCREENSHOT OF THE ADD PRODUCT WINDOW.

For more information on creating and importing APIs, see the API Management
documentation:
https://aka.ms/moc-20487D-m6-pg10

After publishing your API, third parties can use the API Management developer portal to browse your
available APIs and API products, test them from the browser, subscribe to get API keys, and interact with
them. There are even automatically-generated code samples in various languages (C#, Java, Python, and
others) for interacting with your API. The API Management developer portal is a standalone web
application, which is hosted by your API Management instance.

The following screenshot shows the API Management developer portal.

FIGURE 6.21: A SCREENSHOT OF THE API MANAGEMENT DEVELOPER PORTAL.
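After subscribing, a client calls the API with plain HTTP requests that carry the subscription key in the
Ocp-Apim-Subscription-Key header. The following is a minimal sketch of invoking the GET /flights/{flight-id}
operation from the earlier example hierarchy by using HttpClient; the host name and the subscription key
value are placeholders.

Calling an API Management operation with a subscription key

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class FlightsApiClient
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Base address of the API Management instance (placeholder host name)
            client.BaseAddress = new Uri("https://blueyonder.azure-api.net/");

            // Subscription key obtained from the developer portal (placeholder value)
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");

            // Invoke the GET /flights/{flight-id} operation
            HttpResponseMessage response = await client.GetAsync("flights/BY005");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}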

Demonstration: Importing and Testing an OpenAPI Specification


In this demonstration, you will learn how to create a new Azure API Management instance, import an
existing OpenAPI specification, and then test the API.

Demonstration Steps
You will find the steps in the “Demonstration: Importing and Testing an OpenAPI Specification“ section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD06_DEMO.md.

API Management Policies


API Management is not just a wrapper on top of
your back-end APIs and operations. It has a
powerful policy system that can be executed on
each incoming request and outgoing response.
There are many policies available out of the box,
and they are specified using an XML document.
You can specify policies at multiple scopes, which
will automatically apply all the relevant policies to
an operation that is invoked by a client. For
example, you can specify a policy at the global
level, at the product level, at the API level, and at
the individual operation level. You can specify
most policies at any scope.

Some of the commonly used policies include:

• Limit call rate. Restricts API usage to a specific number of calls per interval.

• Cache. Stores a cached response and returns it to the subsequent callers when appropriate.

• Rewrite URL. Converts a URL from its public form to what the backend expects.

• Find and replace string in body. Modifies the request body by performing a string
replace operation.

For a complete list of policies and what they can be used for, refer:
https://aka.ms/moc-20487D-m6-pg11

To specify a policy, you provide the appropriate policy XML definition in the API Management publisher
portal. There is also a simplified form-based editor for common policy definition tasks, such as adding or
removing headers and caching responses.
The following screenshot shows the XML policy editor at the operation scope.

FIGURE 6.22: THE XML POLICY EDITOR


The following policy definition XML specifies that each subscription is allowed to perform two API calls in
each 60-second interval:

Policy definition XML


<policies>
    <inbound>
        <base />
        <rate-limit-by-key calls="2" renewal-period="60"
                           counter-key="@(context.Subscription.Id)" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

In the preceding code example, the rate-limit-by-key policy was placed in the inbound policy section,
which means it will be evaluated prior to calling the service backend. The policy specifies a renewal period
of 60 seconds and a limit of two calls per 60 seconds per subscription.

For more information on policy expressions, such as @(context.Subscription.Id) in the preceding code
example, refer to:

https://aka.ms/moc-20487D-m6-pg12

The following snippet is the HTTP response (including headers) when the rate limit for a service operation
has been exceeded.

HTTP response after rate limit exceeded


Retry-After: 51
Ocp-Apim-Trace-Location: ...
Date: Wed, 25 Apr 2018 12:38:16 GMT
Content-Length: 84
Content-Type: application/json

{
"statusCode": 429,
"message": "Rate limit is exceeded. Try again in 51 seconds."
}
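A client calling a rate-limited API should be prepared for this response. The following is a minimal sketch
(not part of the course code) that detects the 429 status code and honors the Retry-After header by using
HttpClient.

Handling a rate-limit response on the client

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class RateLimitAwareClient
{
    public static async Task<string> GetWithRetryAsync(HttpClient client, string uri)
    {
        while (true)
        {
            HttpResponseMessage response = await client.GetAsync(uri);

            // 429 Too Many Requests indicates that the rate limit policy was triggered
            if ((int)response.StatusCode == 429)
            {
                // Honor the Retry-After header, or fall back to a short default delay
                TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(5);
                await Task.Delay(delay);
                continue;
            }

            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}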

Demonstration: Limiting Call Rates Using API Management


In this demonstration, you will learn how to configure a call rate limit for an API in API Management.

Demonstration Steps
You will find the steps in the “Demonstration: Limiting Call Rates Using API Management“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_DEMO.md.

Lab C: Publishing a Web API with Azure API Management


Scenario
In this lab, you will publish the flight booking web API service behind Azure API Management, which will
allow partners and third-party developers to use the service in a managed fashion with monitoring,
throttling, caching, and additional policies.

Objectives
After you complete this lab, you will be able to:

• Create an API Management instance in the Azure portal.

• Configure API Management for your service.

• Test the API with caching rules.

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD06_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD06_LAK.md.

Exercise 1: Creating an Azure API Management Instance


Scenario
Create an Azure API Management instance in the Azure portal, and add the product, operations, and API
information.

Exercise 2: Testing and Managing the API


Scenario
Test the API under different policies like caching and rate limit.

Module Review and Takeaways


In this module, you learned how to use Web Deploy with various tools, such as Visual Studio 2017, IIS
Manager, and PowerShell to deploy your web applications to on-premises servers and to Azure. You also
learned how to apply best practices when you deploy to production environments, and how automated
builds and continuous delivery can improve the overall quality of your application.

Best Practices
• If you are developing a service that is hosted under IIS, incorporate Web Deploy into your
deployment process.

• Use MSDeploy or the Web Deploy PowerShell snap-in when you deploy web applications through
scripts, instead of using tools such as XCOPY.
• Check whether your SCM supports automated builds and use them. If it does not provide automated
builds, evaluate external third-party automated build tools, or consider switching to an SCM system
that does have automated builds.
• Deploy to the staging environment in Azure before you deploy an updated version to your
production environment.

Review Question
Question: What are the tools that use the Web Deployment Framework?

Tools
• Visual Studio 2017
• IIS

• Web Deploy

• Windows PowerShell
• Microsoft Azure

• Visual Studio Team Services



Module 7
Implementing Data Storage in Azure
Contents:
Module Overview 7-1

Lesson 1: Choosing a Data Storage Mechanism 7-3


Lesson 2: Accessing Data in Azure Storage 7-7

Lab A: Storing Files in Azure Storage 7-15

Lesson 3: Working with Structured Data in Azure 7-16


Lab B: Querying Graph Data with Azure Cosmos DB 7-30
Lesson 4: Geographically Distributing Data with Content Delivery Network 7-31

Lesson 5: Scaling with Out-of-Process Cache 7-39


Lab C: Caching Out-of-Process with Azure Redis Cache 7-44
Module Review and Takeaways 7-45

Module Overview
Storage services are an important concept in cloud computing. Due to the volatile nature of cloud
computing, a single source of truth is needed to maintain consistency of application data and static
resources. For this reason, most (if not all) cloud platforms have a storage solution providing a persistence
store in the cloud.

Azure provides multiple storage services for various purposes:


• Microsoft Azure Blob storage. This provides a file-based persistence store. It is ideal for saving files
and static content.

• Microsoft Azure Files share. This provides a distributed file system that can be accessed via the
Server Message Block protocol from Windows and UNIX-like operating systems.

• Microsoft Azure SQL Database. This provides a fully featured relational store.

• Microsoft Azure Cosmos DB. This provides a fully featured NoSQL solution with key-value,
document, columnar, and graph data stores.
• Microsoft Azure Cache for Redis. This provides a key/value store for fast access.

You can access all these storage services through the various client SDKs or directly by using their HTTP-
based APIs. Microsoft Azure Storage provides an out-of-the-box solution for common data storage
challenges such as securing and transferring a large amount of data.

Note: The Microsoft Azure portal UI and Azure dialog boxes in Visual Studio 2017 are
updated frequently when new Azure components and SDKs for .NET are released. Therefore, it is

possible that you will notice some differences between the screenshots and steps shown in this
module and the actual UI you work within the Azure portal and Visual Studio 2017.

Objectives
After completing this module, you will be able to:

• Describe the architecture of Storage.


• Control access to your Storage items.

• Cache data using Azure Cache for Redis.

• Distribute data by using Microsoft Azure Content Delivery Network.



Lesson 1
Choosing a Data Storage Mechanism
Modern applications store and manipulate many types of data from files to data structures. Choosing the
right storage for each data type is a key issue in modern application development. This lesson guides you
on how to choose the right storage for each type of data.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the difference between blob and file storage.

• Describe the difference between relational and NoSQL databases.

• Describe the difference between SQL Database and Azure Cosmos DB.
• Describe the difference between Azure Cache for Redis and Content Delivery Network.

Storage: Blob storage, Azure Files


Azure provides different storage services intended
for different scenarios:
• Blob storage. This type of storage is a non-
structured collection of objects that can be
accessed by using a resource identifier. It can
be used for storing files, such as images,
videos, large texts, and other non-structured
data.
• Azure Files. This type of storage provides a
distributed file system, which can be mounted
as a network share (or network file system) on
Windows and UNIX-like operating systems.
Applications can then use traditional file system APIs to access the files, as shown in the sketch below.
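For example, after an Azure Files share has been mounted as a network drive on a Windows machine,
reading and writing files is plain System.IO code. The following is a minimal sketch; the drive letter and
file path are hypothetical, and only mounting the share is Azure-specific.

Accessing a mounted Azure Files share with file system APIs

using System;
using System.IO;

public class FileShareExample
{
    public static void Main()
    {
        // Hypothetical path: the Azure Files share mounted as drive Z:
        string filePath = @"Z:\reports\daily-summary.txt";

        // Standard file system APIs work against the mounted share
        File.WriteAllText(filePath, $"Generated at {DateTime.UtcNow:O}");
        string content = File.ReadAllText(filePath);

        Console.WriteLine(content);
    }
}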

Comparison of Storage
Storage has different capabilities. The following table illustrates some of its attributes:

Storage        Type     Access Mechanism                                                            Size
Blob storage   Files    HTTP-based APIs with storage client abstraction and Windows file I/O API    4.75 TB per block blob, 8 TB per page blob
Azure Files    Files    Operating system file I/O APIs or HTTP-based API                            1 TB per file, 5 TB per file share

By comparing the options in Blob storage and Azure Files, you can see that Storage offers a great deal of
flexibility with regards to sizes and access mechanisms.

Blob storage and Azure Files have built-in synchronous replication to other machines within the same
Azure datacenter.

Blob storage also offers a geo-replication feature, which copies data to a second data center in the same
region (North America, Europe, or Asia). This option is enabled by default and offers better protection in
case an entire data center goes offline.

Choosing the right solution depends on the type of application and how the application works with the
data in the cloud. When choosing a solution, you need to take the following into consideration:

• Size of data

• Potential cost

• Location of the application (cloud or on-premises)

• Regulation (the type of data that will be stored)

Relational and non-relational (NoSQL) database types


Traditional relational data is stored in powerful yet
expensive Relational Database Management
Systems (RDBMS) that are based on database
engines such as Microsoft SQL Server. Database
engines provide powerful relational data
management capabilities through SQL queries,
Atomicity, Consistency, Isolation, Durability (ACID)
transactions, and stored procedures. Although
RDBMS are powerful, there is one area where they
inherently fail: scalability. This limitation, which
affects large-scale applications, is one of the
driving forces behind the NoSQL movement.
There are four types of NoSQL data stores:

• Key-value
• Document

• Column

• Graph

Key-value
Key-value stores are designed to store simple data in a scalable manner. You can use them to store a large
set of structured entities at a low cost and issue simple queries to retrieve entities when required.
Key-value stores were designed for linear scale and enforce no schema on the entities stored in the table.
This means you can store different types of entities in the same table.

Key-value stores do not provide any way to represent relationships between entities, and thus do not
support the join operation.

Document
Document stores are designed to store semi-structured data called documents. Documents can be JSON,
XML, or YAML files. Like key-value stores, documents are represented by a key, they are designed for linear
scale, and there is no strict schema for the document. The difference is that a document can also be
retrieved by its content and not only by its key.

Column
Columnar stores are similar to relational databases in that they store data in columns and rows and require
the columns to be defined first. However, column stores organize the data differently: the primary order
is by column rather than by row. This order makes aggregating data by column extremely fast.

Graph
Graph stores are designed to store data in a graph structure of nodes and edges, and queries traverse the
graph. This kind of database is best suited for finding patterns in data or for querying highly connected
data. Although a graph database is a NoSQL database, scaling graph data is hard, but some graph
databases offer a degree of scaling.

Azure database services: SQL Database, Azure Cosmos DB


Azure provides both relational and NoSQL
databases as a service. The SQL Database service is
a fully managed relational database service that
enables you to use SQL Server without the need
for installation and configuration. Azure Cosmos
DB is a fully managed NoSQL database that offers
great scalability and availability capabilities.
There are several similarities and differences
between SQL Database and Azure Cosmos DB
that you should consider when choosing
between them.
• Fully managed. The infrastructure of both is
managed by Microsoft, so there is no need to provision infrastructure or to
install, configure, and maintain the server.

• Data model. SQL Database uses the traditional relational model and is best for data with a strict schema. If
the data for your application needs more flexibility, Azure Cosmos DB offers four kinds of NoSQL data
models. Choosing the right model for each task can speed up development.
• Availability. The availability, configuration, and maintenance of both require minimal configuration
and administration.

• Scalability. Both can be scaled but the main difference is that Azure SQL scales well for read
operations while Azure Cosmos DB scales well for read and write operations.

For more info about SQL Database:


https://aka.ms/moc-20487D-m7-pg1

For more info about Azure Cosmos DB:


https://aka.ms/moc-20487D-m7-pg2

Data distribution and caching with Azure Cache for Redis and Content
Delivery Network
Applications manipulate a lot of data, and storing
the data close to its point of usage is important
for reducing data transfer time over the network.
Some data needs to be available to the
application, and some data needs to be available
to the users. For these purposes, Azure offers two
services.

Azure Cache for Redis


Azure Cache for Redis enables you to store and
retrieve data that is frequently used by the
application. In this way, you reduce the load on the
database and accelerate the response time of the
server.
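As a rough sketch of this pattern (covered in more detail in the out-of-process caching lesson later in this
module), a service can check the cache before querying the database by using the StackExchange.Redis
client; the connection string, key names, and database lookup below are placeholders.

Reading through Azure Cache for Redis before hitting the database

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class CachedFlightRepository
{
    private readonly IDatabase _cache;

    public CachedFlightRepository(string redisConnectionString)
    {
        // Placeholder connection string, for example:
        // "<name>.redis.cache.windows.net:6380,password=<key>,ssl=True"
        _cache = ConnectionMultiplexer.Connect(redisConnectionString).GetDatabase();
    }

    public async Task<string> GetFlightStatusAsync(string flightId)
    {
        string key = $"flight-status:{flightId}";

        // Try the cache first to avoid loading the database
        RedisValue cached = await _cache.StringGetAsync(key);
        if (cached.HasValue)
        {
            return cached;
        }

        // Cache miss - load from the database (placeholder) and cache for 5 minutes
        string status = await LoadStatusFromDatabaseAsync(flightId);
        await _cache.StringSetAsync(key, status, TimeSpan.FromMinutes(5));
        return status;
    }

    private Task<string> LoadStatusFromDatabaseAsync(string flightId) =>
        Task.FromResult("OnTime"); // stand-in for a real database query
}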

Azure Content Delivery Network


Content Delivery Network automatically caches static data such as images and videos on servers close to
the users so that they can access them faster.

Lesson 2
Accessing Data in Azure Storage
Azure Storage provides Blob storage for storing files in a scalable and durable manner.

In this lesson, you will explore the Blob storage features and learn how to use them.

Lesson Objectives
After completing this lesson, you will be able to:

• Choose between block blobs and page blobs.


• Create and delete containers.

• Perform uploads, downloads, deletes, and enumerations on blobs.


• Define Retry policies.

• Create a file share.

Working with Azure Blob storage


Blob storage is used for storing any type of data
that contains no inherent structure. Blobs are held
in a storage account; each account can hold any
number of blobs where each blob may contain
multiple terabytes (based on its type). The
cumulative size of all blobs in a single storage
account may be up to 500 terabytes. Data in blobs
can be exposed publicly to anyone with internet
access, or kept private for your own application.

You can find Blob storage useful for the following


scenarios:

• Storing images or documents to be served


directly to the browser.

• Storing data for centralized distribution.


• Storing audio and/or video for streaming.

• Storing data for background analysis, either by Azure hosted services or by the on-premises
application.
• Replacing existing applications’ use of file systems.

• Providing secure locations for backups and disaster recovery.

This is not a closed list and there are many more scenarios that can benefit from the use of blobs.
However, having so many objects requires some type of organization.

Blob storage is also used extensively throughout Azure. For example, the Azure deployment
mechanism saves deployment packages to Blob storage. These packages are also used by the
autoscaling mechanism. Diagnostics logs are also saved to cloud storage, and Azure virtual machine disks
are persisted to Blob storage as well.

Blob service components


Blobs are stored in containers, which belong to a Blob storage account. The hierarchy of the Blob storage
is fixed in the following manner:

• Storage account. Storage accounts are the root entities of the Blob storage. Every access to Storage
must be done through a Storage account.

• Container. Containers are the sub-entities of the Storage accounts. Each container can contain blobs.
An account can contain an unlimited number of containers. A container can store an unlimited
number of blobs.

• Blob. Blobs are the leaf of the hierarchy and represent a file of any type. There are two types of blobs:
block blobs or page blobs. The differences between block blobs and page blobs are covered later in
this lesson.

Note: The Azure SDK contains a class called CloudBlobDirectory, however, directories are
not part of the hierarchy and simply represent substrings of the blob’s name separated by /.

Each blob can be addressed by using the following URL schema:
http://<storage account>.blob.core.windows.net/<container>/<blob>.

Creating and deleting containers


Blobs are grouped in containers. Containers are a mandatory part of the blob service hierarchy and every
blob in Azure is placed in a container. If you do not provide a container for a blob, it will be created in the
default root container, named $root. Containers contain a flat list of blobs and enforce common management
properties such as access control and user-defined metadata.
You can create a container by getting a reference to its URI and calling the Create or CreateIfNotExists
methods of the CloudBlobContainer class. Asynchronous versions of these methods are also available,
such as CreateAsync and CreateIfNotExistsAsync.
The following code shows how to create a new container.

Creating a new container


// Parse the connection string and create the Blob service client
var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();

// Container names must be lowercase
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
await container.CreateIfNotExistsAsync();

There are two types of blobs targeted for different workloads: block blobs and page blobs.

Block blobs
Block blobs are designed for streaming workloads where the entire blob is uploaded or downloaded as a
stream of blocks. The maximum size for a block blob is 4.75 TB, and it can include up to 50,000 blocks.
Splitting the blob into a collection of blocks allows you to upload a large blob efficiently by using a
number of threads that execute the upload tasks in parallel. Each block is identified by a BlockID and can
vary in size up to a maximum of 100 MB. To upload a block blob, you must first upload a collection of
blocks and then commit them by their BlockID.

Block blobs simplify large file upload over the network by introducing the following features:
• Parallel upload of multiple blocks to reduce communication time

• An MD5 hash can be attached to each block to ensure reliable transfer



• Simple replacement of uncommitted blocks

• Automatic cleanup of uncommitted blocks


It is possible to create a new version of an existing blob by uploading new blocks or deleting existing ones
and committing all BlockIDs of the blob in a single commit operation.

The following code shows how to split a file into blocks and upload them to a block blob

Upload and commit blocks into a block blob


CloudBlobClient storageClient = new CloudBlobClient(new Uri("…"));
CloudBlobContainer container = storageClient.GetContainerReference("mycontainer");

//Get a reference to a block blob.
CloudBlockBlob blob = container.GetBlockBlobReference("myblockblob");
var blockList = new List<string>();

var fs = System.IO.File.OpenRead("MyFile.txt");
byte[] data = new byte[100];
int id = 0;
int bytesRead;
while ((bytesRead = fs.Read(data, 0, 100)) != 0)
{
    // Wrap only the bytes that were actually read from the file
    using (var stream = new System.IO.MemoryStream(data, 0, bytesRead))
    {
        // Block IDs within a blob must all have the same length, so pad the counter
        string blockID =
            Convert.ToBase64String(Encoding.UTF8.GetBytes((id++).ToString("d6")));
        // Upload a block
        await blob.PutBlockAsync(blockID, stream, null);
        blockList.Add(blockID);
    }
}

//Commit the block list
await blob.PutBlockListAsync(blockList);

You can configure the ParallelOperationThreadCount and


SingleBlobUploadThresholdInBytes properties of the BlobRequestOptions object to simplify the
concurrent upload of a large file into a block blob. When a block blob upload is larger than the value
specified in the SingleBlobUploadThresholdInBytes property, the storage client breaks the file into
blocks. You can set the number of threads used to upload the blocks in parallel by using the
ParallelOperationThreadCount property.

The following code shows how to upload a large file to a block blob by using multiple threads

Parallel upload to block blob


CloudBlobClient storageClient = new CloudBlobClient(new Uri("…"));

CloudBlobContainer container = storageClient.GetContainerReference("mycontainer");

storageClient.DefaultRequestOptions.ParallelOperationThreadCount = 10;
storageClient.DefaultRequestOptions.SingleBlobUploadThresholdInBytes = 64000;
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
await blob.UploadFromFileAsync(Path.Combine(path, fileName));

Page blobs
Page blobs are designed for random-access workloads in which clients execute random read and write
operations in different parts of the blob. Page blobs can be treated much like an array of bytes structured
as a collection of 512-byte pages. Handling a page blob is similar to handling a byte array:

• When creating a page blob, you specify a maximum size.

• Read and write operations are executed by specifying an offset and a range (that align to 512-byte
page boundaries)
Unlike block blobs, page blobs do not introduce a separate commit phase, so writes to page blobs
happen in-place and are immediately committed to the blob.

The maximum size for a page blob is 8 TB.

The following code shows how to upload data to page blob.

Upload data to page blob


CloudBlobClient blobClient = new CloudBlobClient(new Uri("…"));

CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");


CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");

// Get some data


byte[] data = GetData();

//Create a 10 MB page blob.


await myPageBlob.CreateAsync(10 * 1024 * 1024);

await myPageBlob.WritePagesAsync(new MemoryStream(data), 0, null);

int offset = 4096;


await myPageBlob.WritePagesAsync(new MemoryStream(data), offset, null);

Reading data from page blobs can be done by using the OpenReadAsync method that lets you stream
the full blob or a range of pages from any offset in the blob, or by using the GetPageRanges method for
getting an enumeration over PageRange objects.
The following code shows how to read from page blob by using OpenRead.

Using OpenRead to read data from page blob


CloudBlobClient blobClient = new CloudBlobClient(new Uri(""));

CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");


CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");

Stream blobStream = await myPageBlob.OpenReadAsync();


byte[] buffer = new byte[4096];
blobStream.Seek(1024, SeekOrigin.Begin);
int numBytesRead = await blobStream.ReadAsync(buffer, 0, 4096);

Unlike block blobs, page blobs are not continuous so when reading over pages without any data stored in
them, the blob service will return zeroes for those pages. You can use the GetPageRangesAsync method
to get a list of the ranges in the blob that contain valid data. You can then enumerate the list and
download the data from each page range.
The following code shows how to read from page blob by using GetPageRangesAsync.

Using GetPageRangesAsync
CloudBlobClient blobClient = new CloudBlobClient(new Uri(""));

CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");


CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");

IEnumerable<PageRange> pageRanges = await myPageBlob.GetPageRangesAsync();


Stream blobStream = await myPageBlob.OpenReadAsync();

foreach (PageRange range in pageRanges)


{
int rangeSize = (int)(range.EndOffset + 1 - range.StartOffset);
blobStream.Seek(range.StartOffset, SeekOrigin.Begin);
byte[] buffer = new byte[rangeSize];
await blobStream.ReadAsync(buffer, 0, rangeSize);
}

Creating retry policies


Storage is a service that applications can access over the network. Network transactions might fail due to
temporary conditions so retrying might be the right thing to do when a data access operation fails. The
Storage client library has a built-in retry mechanism that you can use to instruct your Storage client to
retry when a data access operation fails. To determine how to execute retries, the Storage client uses the
RetryPolicies namespace.

There are three policies built into the Storage client library:
• RetryPolicies.NoRetry. No retry is executed.

• RetryPolicies.LinearRetry. Retries N number of times with the same back-off interval between each
attempt.

• RetryPolicies.ExponentialRetry. Retries N number of times with an exponentially increasing back-


off interval between each attempt.

The following code shows how to use a linear retry policy.

Using the Retry policy


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
var myBlob = container.GetBlockBlobReference("file1");
BlobRequestOptions blobRequestOptions = new BlobRequestOptions();
blobRequestOptions.MaximumExecutionTime = TimeSpan.FromSeconds(10);
blobRequestOptions.RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(1), 10);
await myBlob.UploadFromFileAsync("file.txt", AccessCondition.GenerateEmptyCondition(),
blobRequestOptions, null);

Not all exceptions will cause the Storage client to initiate a retry. Exceptions are classified as retryable or
non-retryable. For example, all HTTP status codes greater than or equal to 400 and less than 500 are
non-retryable statuses, which imply that the service cannot process the client's request because of the
request itself. All other exceptions are retryable. For example, if a client-side timeout was triggered, then
it makes sense to initiate a retry.

After retryable exceptions are caught, the Storage client library evaluates RetryPolicy and decides
whether to initiate a retry. The exception will be presented to the client only if RetryPolicy determines
that there is no need to retry the operation. For example, if RetryPolicy was configured to execute three
retry attempts, the exception is rethrown to the client only when the third attempt fails.

For more information about retry policies, refer:


https://aka.ms/moc-20487D-m7-pg3

It is possible to construct custom retry policies and customize the retry algorithm to fit your specific
scenario. For example, you can set a retry algorithm per exception type. To implement a custom retry
policy, implement the IRetryPolicy interface (or the extended IExtendedRetryPolicy interface), which
determines whether to retry a specific operation and the interval until the next retry.

The following code shows how to create and use a custom retry policy.

A custom Retry policy


//Custom retry policy that retries only HTTP status code 409 (conflict) errors
public class ConflictRetryPolicy : IRetryPolicy
{
    public ConflictRetryPolicy(TimeSpan interval, int attempts) …
    //More methods and fields (_interval, _attempts) elided for clarity

    public bool ShouldRetry(int retryCount, int statusCode, Exception ex,
        out TimeSpan interval, OperationContext context)
    {
        // The out parameter must always be assigned; use the interval from the constructor
        interval = _interval;
        if (retryCount >= _attempts) return false;
        if ((HttpStatusCode)statusCode != HttpStatusCode.Conflict) return false;
        return true;
    }
};

//Use the custom retry policy


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
BlobRequestOptions options = new BlobRequestOptions()
{
RetryPolicy = new ConflictRetryPolicy(TimeSpan.FromSeconds(5), 10)

};
await container.CreateAsync(options, null);

Managing access to public and private blobs


Azure blob containers store information that
might need to be publicly accessible or, on the
contrary, should be kept private. To set the
proper permissions, blob containers can be
configured with various access policies.

By default, containers are private, meaning that


only blob owners who have the storage account
credentials can access the containers. If public
access to a container and its blobs is required, you
can set the container permissions to allow public
access. This means that anybody can read the
contents of the blob without the need to
authenticate their request.
There are three possible container policies you can use:

• Full public read access. Container and blob data can be accessed for reads via anonymous requests
but enumeration of containers in the storage account is blocked. Enumeration of blobs inside a
container, however, is permitted.

• Public read access for blobs only. Blob data can be accessed for read via anonymous request but
enumeration of blobs in a container is blocked.

• Private only. All anonymous requests are blocked.



To set a blob container policy, you have to create a BlobContainerPermissions object and set its
PublicAccess property to one of the BlobContainerPublicAccessType values. Finally, call the
SetPermissionsAsync method on the CloudBlobContainer object and pass the permissions object.

The following code shows how to set a blob access policy to Public read access for blobs only

Set Container Access policy


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("mycontainer");
var permissions = new BlobContainerPermissions() {PublicAccess =
BlobContainerPublicAccessType.Blob};
await container.SetPermissionsAsync(permissions);

Demonstration: Accessing Microsoft Azure Blob Storage from a Microsoft


ASP.NET Core Application
In this demonstration, you will learn how to create Storage Account in the Azure portal and upload and
download blobs from it.

Demonstration Steps
You will find the steps in the “Accessing Microsoft Azure Blob Storage from a Microsoft ASP.NET Core
Application “ section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-
Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.

Creating an Azure File Share


One of the strengths of Storage is that
clients can access it directly, relieving the load
on the application servers. However, client
data should be kept private. Providing clients
with the storage credentials is unacceptable
for obvious reasons, so there should be a way
to grant specific clients the permissions they
need for a short period of time.
Shared Access Signature is a short-lived URL that
grants specific access rights to storage resources
such as containers, blobs, and files for a certain
duration.
Your client can call your web service which will return a Shared Access Signature for a specific resource.
Now the client has a short window of time in which they can perform the operation you allow on a
specific resource.
The access rights granted in a Shared Access Signature define which operations can be performed on the
resource.

All the information about the granted access levels, the specific resource, and the allotted time frame is
incorporated within the Shared Access Signature URL as query parameters. In addition, the Shared Access
Signature URL contains a signature that the storage services use to validate the request.

It is possible to specify all access control information in the URL or to embed a reference to an access
policy. With access policies, you can modify or revoke access to the resource if necessary.
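The same mechanism applies to blobs. The following is a minimal sketch of generating a short-lived,
read-only Shared Access Signature for a single blob by using an ad hoc policy; the container and blob
names are placeholders, and a file-share example that uses a stored access policy appears later in this topic.

Creating a shared access signature for a blob

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("manifests");
CloudBlockBlob blob = container.GetBlockBlobReference("passenger-manifest.pdf");

// Ad hoc policy: read-only access that expires in 30 minutes
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
};

// The SAS token is appended to the blob URI and handed to the client
string sasToken = blob.GetSharedAccessSignature(policy);
string blobSasUri = blob.Uri.AbsoluteUri + sasToken;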

For more information about the structure of the Shared Access Signature URL, consult MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298849&clcid=0x409

To share files, start by creating a file share


https://aka.ms/moc-20487D-m7-pg4

To create a shared access signature for a file, call the GetPermissionsAsync method of a CloudFileShare
object, and add the permissions in the SharedAccessPolicies property.

The following code shows how to create a shared access signature for a file.

Creating a shared access signature for a file


CloudStorageAccount storageClient = CloudStorageAccount.Parse(connectionString);
CloudFileClient fileClient = storageClient.CreateCloudFileClient();

CloudFileShare share = fileClient.GetShareReference("myshare");

FileSharePermissions permissions = await share.GetPermissionsAsync();

string policyName = "sampleSharePolicy" + DateTime.UtcNow.Ticks;

SharedAccessFilePolicy sharedPolicy = new SharedAccessFilePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
    Permissions = SharedAccessFilePermissions.Read
};

permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
await share.SetPermissionsAsync(permissions);

CloudFileDirectory rootDir = share.GetRootDirectoryReference();


CloudFile file = rootDir.GetFileReference("myfile");
string sasToken = file.GetSharedAccessSignature(null, policyName);
string fileSasUri = file.Uri.AbsoluteUri + sasToken;
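
The client can then use the returned URI directly, without holding any storage credentials, because the signature in the query string grants read access for the policy's duration. The following is a minimal sketch of such a client download, assuming the fileSasUri value produced above.

Downloading a file by using its shared access signature URI

// Hypothetical client-side usage of the SAS URI returned by the service
using (var httpClient = new HttpClient())
{
    string fileContent = await httpClient.GetStringAsync(fileSasUri);
}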

Lab A: Storing Files in Azure Storage


Scenario
In this lab, you will store files in Microsoft Azure Blob Storage.

Objectives
After completing this lab, you will be able to:

• Store publicly accessible files in Blob Storage.

• Generate and store private files in Blob Storage.

Lab Setup
Estimated Time: 60 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_LAK.md.

Exercise 1: Store Publicly Accessible Files in Blob Storage


Scenario
Create a Storage account in Azure Portal and upload some files to it.

Exercise 2: Generate and store private files in Azure Blob Storage


Scenario
Implement the GetPassengerManifest service that downloads private blobs from blob storage.
Question: Blue Yonder Airlines would like to extend the Travel Companion application by using
image recognition algorithms to automatically identify interesting landmarks in uploaded photos.
How would you use Azure queues for this task?

Lesson 3
Working with Structured Data in Azure
In the previous lesson, you learned how to store unstructured data in Azure Storage blobs and files.
Unstructured data storage is simple and inexpensive, but it does not yield well to efficient querying and
updating. The Azure cloud platform provides numerous services for storing, querying, and updating
structured data, including SQL Database and Azure Cosmos DB. Both services are globally available and
can scale with your application’s demands, but have slightly different constraints, use cases, and APIs.

In this lesson, we will explore Azure SQL Database, Microsoft’s cloud-optimized, scalable, globally-
distributed version of the popular SQL Server database product. Then, we will discuss Azure Cosmos DB, a
novel database service that supports multiple types of API flavors in a single distributed platform with a
choice of data consistency strategies to fit your application’s architecture and business needs.

Lesson Objectives
After completing this lesson, students will be able to:
• Create an SQL Database and access it from a web application.

• Use the different Azure Cosmos DB database API types.


• Describe the Azure Cosmos DB consistency strategies.

Working with SQL Database


SQL Database is Microsoft’s cloud-scale relational
platform-as-a-service (PaaS) database offering. If
you’re building a system that requires a relational
database, SQL Database is a great choice because
it provides a managed database platform with
predictable performance, where you don’t have to
worry about deploying virtual machines,
configuring database software, and applying
upgrades and security patches. For existing users
of SQL Server, there are migration paths to various
offerings in SQL Database, which can minimize
downtime and provide comparable costs.

SQL Database provides three deployment flavors that you can use. Choosing the right flavor depends on
your business needs, cost and performance requirements, and more. The flavors are:

Single database. You create a new database and assign it performance resources (database transaction
units (DTUs) or vCores, discussed below). The platform guarantees that your database will receive the
necessary hardware resources to support the required load.

Elastic database pool. You create a database pool and assign it performance resources (DTUs). Then, you
assign one or more databases to the pool. The databases in the pool share the pool’s resources, so if one
database maxes out the pool’s resources, other databases will be throttled temporarily. Despite this risk,
the elastic approach is useful when you have varying degrees of loads across numerous databases, and
assigning each database a high number of DTUs would be unreasonably expensive.

SQL Database Managed Instance. You create a managed instance, which is essentially a standalone
managed database server. When using SQL Database Managed Instance, you have 100% compatibility
with the on-premises version of SQL Server, but you do not have to worry about manual database
backup, upgrades, security patching, and other concerns.

Note: In addition to using SQL Database, which is a PaaS offering, you can also deploy the
on-premises version of SQL Server to an Azure Virtual Machine. By doing so, you take on the
responsibility of managing the virtual machine instance, including operating system updates,
security patches, and database upgrades. This is still a reasonable choice in some scenarios, where
you need to lift-and-shift an existing deployment into Azure and perform more fine-grained
migration steps later.

For more information about the differences between SQL Database and the on-premises
version of SQL Server (which is also available in SQL Database Managed Instances), refer:
https://aka.ms/moc-20487D-m7-pg5

You assign performance resources to databases using one of two methods: database transaction units
(DTUs) or virtual cores (vCores). The Azure platform guarantees that your database will receive the
hardware resources required to support the desired load; if you exceed your resource allotment, you
might experience query degradation and throttling (although in many cases, the database will still operate
normally). Workloads from other databases, even your own, will not affect your database -- unless you’re
using an elastic pool.
DTUs: A DTU is a combination of compute, storage, and I/O resources required to service database
operations. To determine the resources required to support a single DTU, Microsoft uses the Azure SQL
Database Benchmark (ASDB), which runs a mix of basic operations for online transaction processing
(OLTP) workloads. There are various pricing tiers with different numbers of DTUs. For example, in the P15
tier, the maximum database size is 4TB, and the maximum number of concurrent requests is 6,400. In the
S0 tier, the maximum database size is 250GB, and the maximum number of concurrent requests is 60.
vCores: Under the vCore purchasing model, you pay for compute resources (virtual cores), data storage,
and the number of I/O operations. You can independently scale the compute and storage resources,
which may provide greater flexibility than the DTU-based pricing.

For more information about SQL Database pricing tiers, refer:


https://aka.ms/moc-20487D-m7-pg6

For more information about the SQL Database Benchmark used to determine DTU
performance equivalents, and how it might relate to your actual workload, refer:
https://aka.ms/moc-20487D-m7-pg7

To evaluate the DTU requirements of your on-premises database workloads, you can use the
SQL Database DTU Calculator, which collects performance counter data from your
on-premises machine and analyzes it to produce an estimate:
https://aka.ms/moc-20487D-m7-pg8

The following screenshot shows the Azure portal dialog for creating a new SQL Database:

FIGURE 7.1: THE DIALOG BOX FOR CREATING A NEW SQL DATABASE IN AZURE PORTAL.
The following screenshot demonstrates the Azure portal dialog for creating a new SQL Database server,
which will host your database:

FIGURE 7.2: SCREENSHOT OF THE NEW SERVER DIALOG BOX



The following screenshot shows the Overview blade in the Azure portal, with the new database’s name
and performance metrics:

FIGURE 7.3: SCREENSHOT OF THE OVERVIEW BLADE


The following screenshot shows the database connection strings, which you can use from your .NET, Java,
PHP, and other applications. To see the connection strings, click Connection strings under SETTINGS.

FIGURE 7.4: A SCREENSHOT OF THE DETAILS RECEIVED AFTER CLICKING THE “SHOW DATABASE CONNECTION STRINGS” HYPERLINK.
After creating a SQL Database instance, or migrating an existing database to SQL Database, you can use
the standard ADO.NET and Entity Framework libraries to connect to the database and perform operations.
You can also use familiar tools such as SQL Server Management Studio or the Visual Studio Server
Explorer to manage the database. Importantly, SQL Database does not allow external applications and
tools to connect to the database remotely; you will need to create a firewall rule that adds the required
client IP addresses so that you can use these tools to connect. By default, only services and applications
running in Azure data centers will be allowed to access the database (and even that can be disabled if
needed).
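
The following sketch shows a basic ADO.NET connection to a SQL Database from a .NET application, assuming that the connection string was copied from the Connection strings blade and that a firewall rule allows the client's IP address. The server, database, and table names are hypothetical.

Connecting to SQL Database by using ADO.NET

using System.Data.SqlClient;

string connectionString =
    "Server=tcp:blueyonder.database.windows.net,1433;Database=blueyonder;" +
    "User ID=…;Password=…;Encrypt=True;";

using (var connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
    using (var command = new SqlCommand("SELECT COUNT(*) FROM Flights", connection))
    {
        // Returns the number of rows in the (hypothetical) Flights table
        int flightCount = (int)await command.ExecuteScalarAsync();
    }
}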

The following screenshot shows the firewall configuration dialog in the Azure portal, which you can use to
allow your client IP address access to the database (you will still need the database username and
password to authenticate):

FIGURE 7.5: SCREENSHOT OF THE SQL SERVER FIREWALL SETTINGS DIALOG BOX
The following screenshot shows the Visual Studio Server Explorer, with the SQL Databases node
expanded, showing the newly created database:

The following screenshot shows the Visual Studio SQL Server Object Explorer attached to the blueyonder
database:

FIGURE 7.7: SCREENSHOT OF VISUAL STUDIO SQL SERVER OBJECT EXPLORER



The following screenshot shows the SQL query editor in the Azure portal, where you can run basic queries
and explore your database without leaving the browser:

FIGURE 7.8: SCREENSHOT OF THE AZURE SQL DATABASE QUERY EDITOR
To import an existing database to SQL Database, you can use a SQL Server .bacpac file, which packages the database schema and data in a single file. You can import the database file from an Azure Storage blob, or from a local file on your machine.

For more information on importing a database from a .bacpac file, see:


https://aka.ms/moc-20487D-m7-pg9

Demonstration: Uploading an Azure SQL Database to Azure and Accessing it Locally
In this demonstration, you will learn how to create a new Azure SQL Database in the Azure portal, import test data by using a .bacpac file, and then access the database locally from SQL Operations Studio.

Demonstration Steps
You will find the steps in the “Uploading an Azure SQL Database to Azure and Accessing it Locally “
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_DEMO.md.

Understanding Azure Cosmos DB API types


Relational databases offer a convenient and
familiar programming model. They also offer
transactional semantics and strong consistency:
there’s no worrying about reading data that was
not yet replicated to another node. Unfortunately,
the convenient programming model and the
strong consistency properties mean that it is very
difficult to scale relational databases horizontally,
across multiple nodes, especially if they are not
located in the same region. Although it is possible
to distribute a relational database across multiple
regions, and use various replication and sharding
strategies to carefully control data reads and writes, it often requires significant programming efforts,
which do not make sense for many systems.
Azure Cosmos DB is a global-scale, non-relational database that works in multiple modes and can be easily
scaled and deployed across multiple regions. It supports numerous consistency guarantees and API
flavors, which means you can design the database to fit your application’s needs and not design your
application around the database features. When deploying the database to more than one region, your
application will automatically send requests to the region closest to it, enjoying excellent latency and
request balancing.

Creating an Azure Cosmos DB account


When you create an Azure Cosmos DB account, you choose the API type for that account. It affects the
data model and the query syntax you’ll use for that database. The supported API types include:
• SQL. Use SQL syntax over data stored in JSON documents. This is commonly referred to as a
document database.

• Mongo DB. Manipulate data using the Mongo DB APIs, compatible with existing Mongo DB client
libraries. The data is stored in JSON documents.

• Cassandra. Use the Cassandra query language and protocol to manipulate data organized in a
tabular format. This is commonly referred to as a wide column store.
• Table. Store data in a simple key-value format, mimicking Azure Storage tables.

• Graph (Gremlin). Store graph nodes and edges and use the Gremlin query language with Open
Graph APIs.

For more information about Azure Cosmos DB supported API types, refer:
https://aka.ms/moc-20487D-m7-pg10

The following screenshot shows the Azure portal dialog for creating a new Azure Cosmos DB account, and
selecting the API you would like to use:

FIGURE 7.9: THE AZURE COSMOS DB NEW ACCOUNT DIALOG BOX


The following screenshot shows a deployed Azure Cosmos DB account using the Mongo DB API:

FIGURE 7.10: THE AZURE COSMOS DB OVERVIEW DIALOG BOX IN AZURE PORTAL.
After creating an Azure Cosmos DB account, you can configure the global distribution for your database.
You can use a database with only a single region, but you can gain more availability and performance
from scaling to more than one region. Azure Cosmos DB provides automatic failover in case a region fails
and directs client traffic (reads and writes) to the closest region that can satisfy the request. For some
consistency strategies (see below), it becomes possible to satisfy read requests immediately from a local
region without consulting replicas in other regions.

The following screenshot shows the Azure portal dialog for selecting global regions for your Azure
Cosmos DB account, and configuring their read/write/read-write status:

FIGURE 7.11: THE REPLICATE DATA GLOBALLY DIALOG BOX OF AZURE COSMOS DB

For more information on distributing data globally to multiple regions with Azure Cosmos
DB, refer:
https://aka.ms/moc-20487D-m7-pg11

You can scale Azure Cosmos DB accounts using one of two modes: fixed or unlimited. In fixed mode, your
account is limited to 10GB of storage capacity. Additionally, you configure your account with a
throughput limit of 400 - 10,000 Request Units (RU) per second. A Request Unit corresponds to a read
operation on a single 1KB document. In the unlimited mode, you can scale to an unlimited storage
capacity and a throughput limit of 10,000 - 100,000 RU/s.

For a more detailed explanation of how Request Units correlate to create, read, update, and
delete operations on documents, refer:
https://aka.ms/moc-20487D-m7-pg12

You can use the Azure Cosmos DB capacity planner to estimate the RU and data storage
requirements of your account:
https://aka.ms/moc-20487D-m7-pg13

Azure Cosmos DB with Mongo DB API


Mongo DB is a popular document database, which stores documents in JSON format and has a
convenient syntax to query, manipulate, and process documents in bulk. Mongo DB has a variety of client
libraries and integrations, which makes it a great fit for many application types. By using Azure Cosmos
DB with the Mongo DB API, you can easily scale a Mongo DB database without worrying about
installations, sharding, reliability, data distribution, and other concerns.

The following screenshot shows the Data Explorer pane in the Azure portal, which you can use to
manipulate data stored in an Azure Cosmos DB account through the Mongo DB API:

FIGURE 7.12: THE DATA EXPLORER PANE IN THE AZURE COSMOS DB DIALOG BOX IN AZURE PORTAL.
In a .NET application, you can connect to the Azure Cosmos DB account configured with the Mongo DB
API by using the standard .NET Mongo DB driver library, which is available on NuGet. You obtain the
connection string from the Connection String pane under SETTINGS in the Azure Cosmos DB portal. The
driver is not aware that it is communicating with an Azure Cosmos DB account; from the driver’s
perspective, Azure Cosmos DB implements the Mongo DB protocol like a native Mongo DB database
deployment.

The following code accesses the Azure Cosmos DB account using the .NET Mongo DB driver library, and
retrieves all the flights from Paris:

Accessing Azure Cosmos DB by using the .NET Mongo DB driver


var connectionString =
"mongodb://blueyonder:…@blueyonder.documents.azure.com:10255/…";
var client = new MongoClient(connectionString);
var flightsDB = client.GetDatabase("flights");
var flightsCollection = flightsDB.GetCollection<Flight>("flights");
var filter = Builders<Flight>.Filter.Eq("source", "Paris");
var flightsFromParis = flightsCollection.Find(filter);
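
The query results can then be materialized, and new documents can be written through the same collection object. The following is a minimal sketch of both operations; the Flight class and its property names are hypothetical and must match your document structure.

Consuming query results and inserting a document

List<Flight> flights = await flightsFromParis.ToListAsync();
await flightsCollection.InsertOneAsync(new Flight { Source = "Berlin", Destination = "Paris" });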

For more information on the Azure Cosmos DB Mongo DB API, refer:


https://aka.ms/moc-20487D-m7-pg14

Azure Cosmos DB with Graph (Gremlin) API


Graph databases are quite different from document databases, in that they store information as a
collection of nodes (vertices) and edges between nodes, i.e. a graph. Both nodes and edges can have
additional properties attached. Special query languages such as Cypher and Gremlin are used to query the
graph by exploring its relationships. For example, retrieving all nodes connected by a specific relationship.
The connected nature of a graph makes it easier to express certain types of queries in a graph database
compared to a relational or document database. For example, if your graph consists of Person nodes
representing social media users, there is a Friend edge between each two friends, and there is a Follow
edge between a user and another user or page that they follow, then you can easily perform queries such
as finding mutual friends between two users, or finding the friends of a friend who are interested in a
certain page, and so on.

The following screenshot shows the Azure Cosmos DB Data Explorer for accessing data in an Azure
Cosmos DB account configured with the Graph (Gremlin) API:

FIGURE 7.13: THE DATA EXPLORER PANE IN THE AZURE COSMOS DB DIALOG BOX IN AZURE PORTAL.
The following screenshot shows the Data Explorer with a graph that contains multiple nodes and edges,
representing cities that are connected by flights:

FIGURE 7.14: THE DATA EXPLORER PANE IN THE AZURE COSMOS DB DIALOG BOX IN AZURE PORTAL.
You can use the Gremlin console to connect to an Azure Cosmos DB account configured with the Gremlin
API. In the Gremlin console, you can create new nodes and edges and configure them with properties,
and you can execute queries against the existing graph. The Gremlin query language consists of
numerous operators, such as V() for the entire set of nodes (vertices), E() for the entire set of edges,
has('property', 'value') for filtering edges or nodes, and many others.

To download the Gremlin Console, go to:


https://aka.ms/moc-20487D-m7-pg15

For detailed instructions on configuring Gremlin Console to connect to an Azure Cosmos DB account, refer:
https://aka.ms/moc-20487D-m7-pg16

The following screenshot shows the Gremlin console connected to an Azure Cosmos DB account and
performing queries:

FIGURE 7.15: THE GREMLIN CONSOLE WINDOW.


The following code snippet creates multiple city nodes and flight edges between them, and then runs a
query that returns all the cities reachable with a single stopover from a source city:

Creating data by using gremlin script


:> g.addV('city').property('name', 'Paris')
:> g.addV('city').property('name', 'London')
:> g.addV('city').property('name', 'Amsterdam')
:> g.addV('city').property('name', 'Berlin')
:> g.addV('city').property('name', 'Barcelona')
:> g.addV('city').property('name', 'Stockholm')

:> g.V().has('name', 'Paris').addE('flight').to(g.V().has('name', 'London')).property('number', 'BY97')
:> g.V().has('name', 'London').addE('flight').to(g.V().has('name', 'Paris')).property('number', 'BY98')
:> g.V().has('name', 'Paris').addE('flight').to(g.V().has('name', 'Amsterdam')).property('number', 'BY91')
:> g.V().has('name', 'Amsterdam').addE('flight').to(g.V().has('name', 'Berlin')).property('number', 'BY88')
:> g.V().has('name', 'Amsterdam').addE('flight').to(g.V().has('name', 'Barcelona')).property('number', 'BY82')
:> g.V().has('name', 'Berlin').addE('flight').to(g.V().has('name', 'Stockholm')).property('number', 'BY76')
:> g.V().has('name', 'Paris').addE('flight').to(g.V().has('name', 'Berlin')).property('number', 'BY66')

:> g.V().has('name', 'Paris').out('flight').out('flight').has('name', neq('Paris')).values('name')
==>Berlin
==>Barcelona
==>Stockholm

For more information on the Gremlin language syntax, refer:


https://aka.ms/moc-20487D-m7-pg17

Demonstration: Using Microsoft Azure Cosmos DB with the MongoDB API

In this demonstration, you will create a new Azure Cosmos DB instance with MongoDB API in Azure portal
and use a script to create collections with some objects then run some queries on the objects.

Demonstration Steps
You will find the steps in the “Using Microsoft Azure Cosmos DB with the MongoDB API “ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_DEMO.md.

Demonstration: Using Cosmos DB with a Graph Database API


In this demonstration, you will create a new Azure Cosmos DB instance with Graph API in the Azure portal
and use a script to create nodes and relationships then run Gremlin queries on the graph.

Demonstration Steps
You will find the steps in the “Using Cosmos DB with a Graph Database API“ section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_DEMO.md.

Azure Cosmos DB consistency strategies


The Consistency, Availability, Partition tolerance (CAP) theorem proposed by Eric Brewer affects all modern distributed applications. The CAP theorem states that it is impossible for a distributed data store, such as Azure Cosmos DB, to provide more than two of the following three guarantees:

• Consistency. Every read receives the most recent write or an error.
• Availability. Every request receives a non-error
response, but without the guarantee that a
read contains the most recent write.

• Partition tolerance. The system continues to function when facing network partitions, i.e. when
arbitrary messages are dropped or delayed by the network.

Because no modern distributed network is completely safe from network partitions, it follows from the theorem that you must trade consistency for availability. In the face of a network partition, your system will either stop responding to requests (loss of availability to preserve consistency), or it might diverge into separate views of the most recent data (loss of consistency to preserve availability).

For more information on the CAP theorem, refer:


https://aka.ms/moc-20487D-m7-pg18

One of the key effects of bringing the CAP theorem into the minds of software architects and engineers is
that distributed storage systems are now being designed with interesting consistency models and
distributed applications need to take advantage of these consistency models. Beyond just trading
availability for consistency, there are varying degrees of consistency that can have wildly different
performance characteristics. For example, strong consistency (where every read must receive the most
recent write) is quite expensive to achieve in a distributed system, and some applications might be able to
relax the consistency requirements to obtain better performance or lower costs.
Many distributed databases offer poorly-defined consistency guarantees, or only a choice between strong
and eventual (weak) consistency. Azure Cosmos DB provides five consistency models (strategies) which
you can choose from. Furthermore, you can choose a strong consistency strategy for your database and
then relax it and use a weaker strategy for specific operations where it would be beneficial for
performance. The consistency strategies supported by Azure Cosmos DB are:

• Strong. Reads are guaranteed to return the most recent version of an item. A write only becomes
visible after it is committed by a majority of replicas, and performing a read requires
acknowledgement from a majority of replicas as well. When using strong consistency, you can’t
associate your Azure Cosmos DB account with more than one region (because it would be
prohibitively expensive to consult a majority of replicas in real-time).

• Bounded Staleness. Reads lag behind writes by at most k versions of an item, or at most t seconds.
With bounded staleness, you can use more than one region. The read cost in terms of Request Units
is the same as with strong consistency.

• Session. Consistency is scoped to a single client session. It is guaranteed that the client can read its
own writes and that reads and writes are monotonic (for example, if one read in a given session
returned version 7 of an item, the next read of the same item cannot return a version earlier than 7).
The read cost in terms of Request Units is lower than with bounded staleness or strong consistency.

• Consistent Prefix. Eventual convergence of all the replicas is guaranteed if at some point writes are
stopped. Reads don’t see out-of-order writes. For example, if the write order was A, B, C, then client
reads might see A, B or A, B, C, but a client will not see B, A, C. The read cost in terms of Request Units
is the same as with session consistency.

• Eventual. Eventual convergence of all the replicas is guaranteed if at some point writes are stopped. A
client might read values older than ones it had seen before. For example, read version 7 of an item
and then read version 5 of the same item. This mode has the lowest read cost in terms of Request
Units compared to all the other options.

Although it might sound as though strong consistency is the only option for building a reliable, correct
distributed system, there are often reasons why the consistency model can be relaxed. For example, if it is
known that only a single client is updating a certain item (a specific player’s high score in a game that is
only installed on a single device), then using strong consistency is not required and session consistency
can be used instead.
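
In the Azure Cosmos DB .NET SDK for the SQL API, you can relax the consistency level for an individual request by passing request options. The following is a minimal sketch, assuming the Microsoft.Azure.DocumentDB NuGet package, a non-partitioned collection, and hypothetical endpoint, key, database, collection, and document identifiers.

Relaxing the consistency level for a single read

// The endpoint, key, and resource names below are hypothetical
var client = new DocumentClient(
    new Uri("https://blueyonder.documents.azure.com:443/"), authKey);

// This read uses eventual consistency (the lowest RU cost),
// even if the account's default consistency is stronger
var response = await client.ReadDocumentAsync(
    UriFactory.CreateDocumentUri("flightsdb", "reservations", "reservation-42"),
    new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });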

For more information on the Azure Cosmos DB consistency levels, and to understand how to
choose the appropriate consistency model for your needs, refer:
https://aka.ms/moc-20487D-m7-pg19

For a more thorough explanation of various consistency guarantees in distributed data stores
through examples, refer:
https://aka.ms/moc-20487D-m7-pg20

Lab B: Querying Graph Data with Azure Cosmos DB


Scenario
In this lab, you will query graph data with Azure Cosmos DB.

Objectives
After completing this lab, you will be able to:

• Create the Azure Cosmos DB graph database.

• Query the Azure Cosmos DB database.

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_LAK.md.

Exercise 1: Create the Azure Cosmos DB Graph Database


Scenario
Create Azure Cosmos DB graph database in Azure Portal and load some data.

Exercise 2: Query the Azure Cosmos DB Database


Scenario
Implement the GetAttractions and GetStopOvers services by using the Azure Cosmos DB graph database.

Lesson 4
Geographically Distributing Data with Content Delivery
Network
Scaling services so that they are operating at their optimal level for users in different countries or
continents can be a challenge. Cloud platforms such as Azure provide several features to simplify the
process of scaling.

In this lesson, you will learn about the issues that apply to services that need to scale on a global scale and
how Azure can help.

Lesson Objectives
After completing this lesson, students will be able to:

• Describe how to load balance resources by using Content Delivery Networks.


• Describe the Azure options for load balancing applications across data centers.

The need for a Content Delivery Network


By making your application available over the
Internet either with or without the use of a cloud
platform such as Azure, you can enable people all
over the world to access it. Your application may
become popular in some part of the world that is
geographically distant from where that service is
hosted. The distance, however, may lead to long
round-trip times and have a negative effect on
user experience. It is, therefore, useful to host the
application as close to the target users as possible.
The function of a Content Delivery Network is to
provide content to users with minimum latency
and maximum availability. To that end, Content Delivery Networks maintain data centers in selected
locations around the world that are meant to reduce round-trip times for users. Content Delivery
Networks are ideally suited to serving static content such as image, script, multimedia, and application
files. Because many applications contain a large number of such resources, the use of a Content Delivery
Network can be beneficial in two ways:

• For users. Static content is delivered quickly and user experience is enhanced. Long round-trip times
are only required for accessing the actual dynamic portions of the application.
• For developers. Traffic to the application’s servers is reduced to only such requests that require
dynamic content. Scalability is enhanced and costs are lowered. It is the Content Delivery Network
that bears most of the traffic for the application.

The Content Delivery Network


The Content Delivery Network is used to maintain copies of the contents of Storage blobs, static outputs
of compute instances, dynamic web application content, various types of images and video media, and
more, in sites around the world. There are three Content Delivery Network products: Azure Content
Delivery Network Standard from Akamai, Azure Content Delivery Network Standard from Verizon, and
Azure Content Delivery Network Premium from Verizon. The point-of-presence (POP) locations for these
Content Delivery Network offerings include dozens of cities on every continent (except Antarctica).

For more information on the individual features supported by each Content Delivery Network offering,
refer:

Overview of the Azure Content Delivery Network


https://aka.ms/moc-20487D-m7-pg21

When creating a new Content Delivery Network endpoint, you specify the origin type for the endpoint, as
well as the resources under that origin that you would like to cache. The available origin types include:

• Storage. An Azure Storage account

• Cloud service. An Azure Cloud Service


• Web Apps: Azure Web Apps
• Custom origin: Any publicly accessible web server (which may be hosted outside of Azure)

The first time a specific object is requested from Content Delivery Network, it will be retrieved from its
origin and cached at the Content Delivery Network endpoint. It will subsequently be served directly from
Content Delivery Network. Note that differences in URL query string parameters are ignored by default
(treated as the same resource), but you can configure this behavior.

Content Delivery Network features


Content Delivery Network has several features to
store data as close as possible to the users.

Dynamic site acceleration


The basic feature of Content Delivery Network is the ability to cache static files, such as images and videos, on servers close to the users; this type of caching benefits any website or application, such as social media and e-commerce sites, that uses a lot of static files. Dynamic site acceleration extends this by optimizing the delivery of dynamic content that cannot be cached, using techniques such as route and network optimizations.

For more information about dynamic site acceleration, refer:
https://aka.ms/moc-20487D-m7-pg22

Content Delivery Network caching rules


Content Delivery Network lets you define rules that control how your content is cached. For example, you can configure it so that all resources are cached for one day. There are two types of caching rules:

• Global. One caching rule for all resources.

• Custom. One or more caching rules, each specifying the path and file extension to which the rule applies.

It's possible to provide the caching setting in the query string in the URI of the resource.

There are three caching behaviors:

• Bypass cache. Do not cache the resource.

• Override. Ignore any caching setting provided in the query string and cache for the duration provided by the rule.

• Set if missing. If no caching setting is provided in the query string, use the duration provided by the rule.

For more information about caching rules, refer


https://aka.ms/moc-20487D-m7-pg23

HTTPS custom domain support


Content Delivery Network enables you to secure your data by using the HTTPS protocol, which ensures that sensitive data is encrypted when it is transferred over the network.

For more information about HTTPS custom domain support, refer:


https://aka.ms/moc-20487D-m7-pg24

Azure diagnostics logs


To help you analyze and tune the caching rules for the best application performance and reduced cost, Content Delivery Network allows you to export metrics about your Content Delivery Network usage.

For more information about Azure diagnostics logs, refer:


https://aka.ms/moc-20487D-m7-pg25

File compression
Content Delivery Network enables you to compress files before they are sent to the users. This way, users get a more responsive experience, and the network traffic is reduced to save costs.

For more information about Content Delivery Network file compression, refer:
https://aka.ms/moc-20487D-m7-pg26

Geo-filtering
Content Delivery Network enables you to restrict access to some resources from specific countries by
creating a rule.

For more information about geo-filtering with Content Delivery Network, refer:
https://aka.ms/moc-20487D-m7-pg27

Using Content Delivery Network with a static website


To use Content Delivery Network with your website, create a Content Delivery Network resource in the Azure portal.

Creating Content Delivery Network in the Azure portal

FIGURE 7.16: THE CREATE CDN PROFILE DIALOG BOX IN AZURE PORTAL.

After the Content Delivery Network profile has been created, navigate to the Content Delivery Network blade and add a new Endpoint. In Origin type, choose the WebApp option, and in Origin hostname, select your website URL.

The following image is a screenshot of the Add an endpoint dialog box for the created Content Delivery
Network.

FIGURE 7.17: A SCREENSHOT OF THE CONTENT DELIVERY NETWORK PROFILE WINDOW WITH THE ADD AN ENDPOINT DIALOG BOX.
After the endpoint is created, you can select it and configure a custom domain, compression, caching rules, geo-filtering, and optimization.

The following image is a screenshot of the menu options for the newly created endpoint.

FIGURE 7.18: A SCREENSHOT OF THE MENU OPTIONS FOR A NEWLY CREATED CONTENT DELIVERY NETWORK ENDPOINT.

For more information about using the Content Delivery Network dynamic site acceleration,
refer:
https://aka.ms/moc-20487D-m7-pg22

Demonstration: Configuring a CDN Endpoint for a Static Website


In this demonstration, you will learn how to create a new CDN endpoint for an existing static website in
Azure portal and test it.

Demonstration Steps
You will find the steps in the “Configuring a CDN Endpoint for a Static Website“ section on the following
page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_DEMO.md.

Lesson 5
Scaling with Out-of-Process Cache
A distributed cache is a basic component for implementing high-scale distributed applications. Application servers can store a large set of information in a collection of servers forming a cache cluster. The information is stored in-memory across the cluster to provide low latency and high throughput.

This lesson describes Azure Cache for Redis and the API for executing data access operations.

Lesson Objectives
After completing this lesson, students will be able to:

• Describe the motivation for distributed caching.


• Describe the architecture of Azure Cache for Redis.

• Execute basic data access operations on caches.

The need for out-of-process caching


Traditional caching improves performance by
storing data close to the application. Instead of
executing a long and expensive data access
transaction to a database, a simple and fast
memory lookup can fetch the data.
Storing data in an in-memory cache is simple and
efficient yet it assumes that the application and
the in-memory cache are located on the same
machine.
Large-scale applications run on multiple servers.
Assumptions concerning the identity of the
execution machine simply break in such scenarios.
Load balancers distribute requests across execution servers so clients do not know which servers will
handle their request. The first request of a business transaction might reach server A but the second
request of the same transaction might be handled by server B. Data stored in-memory on one server is
unavailable to other machines so storing data in-memory will be useless when requests span over multiple
servers.

In high-scale scenarios, you have to store data in an independent data store that is accessible to all computers that may request it. One option is to store the data in the database, but each data access will suffer from long delays. Another solution is to create a dedicated server for storing data in-memory for all other execution machines. However, a single server is limited in its memory capacity and is unreliable by design. Highly scalable applications often need to store much more data in memory than a single machine can handle, and cannot afford a single point of failure.
The solution is a distributed cache that spans over multiple servers. Data is stored in memory on multiple
machines so the cache can grow in size and in transactional capacity. However, clients work against a
single logical cache without knowing where the data is actually stored.

Caches can be useful to store temporary data. All data items in the cache are automatically removed
according to expiry periods and cleanup policy. The developer is free from handling garbage collection of
unnecessary data stored in the cache. Applications can store intermediate data in the cache, use it in their
calculations, and then forget about it. It will be automatically cleaned.

Distributed caches simplify the execution of parallel tasks across servers in high-performance computing
or map-reduce applications. A complex job can be divided into simpler tasks, distributed across servers
and executed in parallel. Intermediate results produced by such tasks can be stored in the cache before
being used by other tasks in the execution flow.
If data reliability is required, you can use replication and store the same data on multiple cache servers. If one server fails, the data will still be available.

With distributed cache, you can improve the performance of high-scale applications that span multiple
servers. Distributed cache is as simple to use as traditional in-memory cache but can grow in size
according to demand and can serve multiple applications simultaneously.

Applications such as ASP.NET websites deployed on a web farm with multiple servers can store their
session state in a distributed cache and gain fast data access across the web farm as well as automatic
cleanup.

Features to look for in a cache solution


A distributed cache is an in-memory data store. Some distributed cache solutions can also be used as message brokers. They support master-slave replication, where a copy of each cached object is stored on a secondary host in case of a primary failure, as well as optional persistence to disk by periodically dumping out the data store or appending each command to a log.
You can store strings, lists, sets, sorted sets, and a variety of additional data structures. Transaction support lets you perform multiple operations atomically, and some operations are inherently atomic, such as appending a value to a string or inserting a value into a list. The distributed cache also supports scripting, least recently used (LRU) eviction of keys, time-based key expiration, and a publish/subscribe model (message broker).
operating even in the face of a partition or when some of the nodes are experiencing failure. A client can
connect to the cluster and treat it as a single node for the purpose of storing and retrieving objects, but
data is split and replicated behind the scenes.

On-premises caching solutions


There are several cache solutions that can be used
on premises:

• Redis is a popular open-source solution with a rich feature set that evolves frequently. In the next topic, we discuss how to use Azure Cache for Redis, which is based on the Redis solution.

• Memcached is also a popular solution, but with fewer features; it is a more basic solution compared to Redis, and it has been in use for a long time.

Azure caching solutions – Azure Cache for Redis


Azure Cache for Redis is a managed implementation of the open-source Redis cache, which can be accessed remotely or by any application within Azure. Being managed, it is a service that you don’t need to install, configure, or update – it is fully hosted and managed by Azure.
Azure Cache for Redis has multiple pricing tiers:

• Basic. A single node cache with multiple cache sizes up to 53 GB.
• Standard. A two-node cache configuration
with a primary cache and its replica, with multiple cache sizes up to 53 GB.
• Premium. A two-node cache configuration with a primary cache and its replica, with multiple cache
sizes up to 530 GB, and additional features.

If you are using the Premium pricing tier, you can create clusters that exceed the 53 GB limit of an individual cache and shard data across multiple Redis nodes. You can also configure persistence to persist your cache to a Storage account, achieving resiliency and faster startup times because the cache is immediately populated.

To create a new Azure Cache for Redis, you can use the Azure portal, Azure Resource Manager templates,
Azure PowerShell, or the Azure CLI. You can subsequently configure the cache by using all of these
methods as well. The following screenshot demonstrates the new cache configuration dialog box in the
Azure portal, which you can find under + Create a resource > Databases > Redis Cache. To submit a
request for cache creation, which is performed within minutes, click Create.

The following image shows the New Redis Cache blade for creating a new Azure Cache for Redis.

FIGURE 7.19: A SCREENSHOT OF THE NEW REDIS CACHE BLADE.



To access Azure Cache for Redis, use the same Redis client API as for an on-premises Redis cache. In the next topic, “Using Azure Redis Cache from Code,” we will use the StackExchange.Redis NuGet package to access the Azure-hosted Cache for Redis.
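
The following sketch shows the basic pattern of connecting to the cache and storing and retrieving a value with the StackExchange.Redis package; the cache host name and access key in the connection string are hypothetical.

Storing and retrieving values in Azure Cache for Redis

// The host name and password (access key) below are hypothetical
string configuration =
    "blueyonder.redis.cache.windows.net:6380,password=…,ssl=True,abortConnect=False";

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(configuration);
IDatabase cache = connection.GetDatabase();

// Store a value with a 30-minute expiration, and then read it back
cache.StringSet("latest-destination", "Paris", TimeSpan.FromMinutes(30));
string destination = cache.StringGet("latest-destination");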

To learn more about Azure Cache for Redis, refer:

https://aka.ms/moc-20487D-m7-pg29

Demonstration: Using Microsoft Azure Redis Cache for Caching Data


In this demonstration, you will learn how to create Azure Redis Cache in Azure portal and cache data in
ASP.NET Core services.

Demonstration Steps
You will find the steps in the “Using Microsoft Azure Redis Cache for Caching Data“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_DEMO.md.

Lab C: Caching Out-of-Process with Azure Redis Cache


Scenario
In this lab, you will cache data out-of-process with Azure Redis Cache.

Objectives
After completing this lab, you will be able to:

• Create the Azure Redis Cache service.

• Access the cache service from code.

Lab Setup
Estimated Time: 30 minutes

You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD07_LAB_MANUAL.md.

You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD07_LAK.md.

Exercise 1: Create the Azure Redis Cache Service


Scenario
Create Azure Redis Cache in Azure portal.

Exercise 2: Access the Cache Service from Code


Scenario
Implement an ASP.NET Core service that caches data in Azure Redis Cache.

Exercise 3: Test the Application


Scenario
In this exercise, you will test the application by running it multiple times and verifying that you are getting cached data.

Module Review and Takeaways


In this module, you learned about Storage and how to create storage accounts. You learned what Blob storage is, and how to use it to manage files in the cloud. You also learned about using SQL Database and Azure Cosmos DB to store and manipulate structured data, and about data scalability options with Content Delivery Network and Azure Cache for Redis.

Review Question
Question: You have been approached by an online educational organization and asked to
design an application for tracking student activity. How would you use Storage for this task?

Module 8
Monitoring and Diagnostics
Contents:
Module Overview 8-1

Lesson 1: Logging in ASP.NET Core 8-2

Lesson 2: Diagnostic Tools 8-11


Lab A: Monitoring ASP.NET Core with ETW and LTTng 8-23

Lesson 3: Application Insights 8-24


Lab B: Monitoring Azure Web Apps with Application Insights 8-38
Module Review and Takeaways 8-39

Module Overview
In the real world, many application failures occur only in production environments and not on the developer’s machine. Understanding why applications fail and obtaining as much information as possible
from the runtime environment is of paramount importance to operations engineers and developers
looking to resolve bugs or understand application performance. Additionally, security concerns frequently
require collecting audit information from production machines for accountability and analysis purposes.
This module discusses tracing, with a focus on web service tracing and on auditing technologies provided
by Microsoft Azure. The module begins with tracing in the Microsoft .NET Framework by using
System.Diagnostics, and then describes tracing in web service infrastructures such as Windows
Communication Foundation (WCF) and Microsoft ASP.NET Web Application Programming Interface (API).
Finally, it explains the information you can get from the host with Microsoft Internet Information Services
(IIS), as well as Azure monitoring and diagnostics.

Note: The portal UI and Azure dialog boxes in Microsoft Visual Studio 2017 are updated
frequently when new Azure components and SDKs for .NET are released. Therefore, it is possible
that some differences will exist between screenshots and steps shown in this module, and the
actual UI you encounter in the portal and Visual Studio 2017.

Objectives
After completing this module, you will be able to:
• Perform tracing in the .NET Framework with the System.Diagnostics namespace.

• Configure and explore web service and IIS tracing.

• Monitor services by using Azure Diagnostics.


• View and collect Azure metrics in the Azure portal.

Lesson 1
Logging in ASP.NET Core
The most common type of diagnostic data you can expect from a production system is logs. There are
numerous ways to emit log messages (or traces) and many ways to format, store, and analyze them; many
of you know the feeling of chasing a bug through thousands of lines of logs. Later in this module, we will
discuss some alternative approaches to monitoring and diagnostics, which do not require parsing
extensive log messages from your application. Nonetheless, you can find and fix some problems only by
carefully checking logs and traces and correlating them to issues in the application code and
configuration.

In this lesson, we will explore the ASP.NET Core logging framework, which is easy to use, extensible, and
ships with a large number of built-in logging providers. We will emit logs to various providers and learn
how to stream diagnostic logs from ASP.NET Core services that run in the Azure App Service.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain how to emit log messages from various ASP.NET Core application components.

• Explain how to configure logging levels, categories, scopes, and structured logs.
• Write messages to various logging providers, including Event Tracing for Windows (ETW).
• Use third-party logging providers with the ASP.NET Core logging API.

• Describe streaming diagnostic logs from an application that run in the Azure App Service.

Overview of the logging framework


To write log messages by using the ASP.NET Core logging API, you need an object that implements the ILogger interface. This is the object that you use to write logs, and it provides a variety of extension methods that include:

• LogCritical, LogError, LogWarning, LogInformation, LogDebug, LogTrace. Helper methods that log messages at different log levels. (Log levels are discussed in Topic 2, “Advanced Logging Configuration.”)

• IsEnabled. Returns whether a specific log level is enabled, so that you can avoid generating expensive log data if the log message will not actually be written anywhere (a short sketch follows this list).
• BeginScope. Begins a logical operation scope, which can group associated logs and make them
easier to understand later. (Scopes are also discussed in Topic 2, “Advanced Logging Configuration.”)
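
The following is a minimal sketch of guarding expensive log-data generation with IsEnabled; the BuildDetailedManifest helper is hypothetical and stands in for any costly computation.

Avoiding expensive log-data generation with IsEnabled

// BuildDetailedManifest is a hypothetical, expensive helper method
if (_logger.IsEnabled(LogLevel.Debug))
{
    _logger.LogDebug("Flight manifest: {manifest}", BuildDetailedManifest());
}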

To obtain an ILogger object, you can use ASP.NET Core dependency injection. This is the easiest way and it works very well for controller methods that need a logger. A more advanced approach is to obtain an ILoggerFactory object, which you can use to configure where your logs are written, and then create
an ILogger object from the logger factory. You can use ASP.NET Core dependency injection to obtain the logger factory, or create a new LoggerFactory object that implements the ILoggerFactory interface.

The following code shows an ASP.NET Core controller that uses dependency injection to obtain an
ILogger object:

Using dependency injection to obtain an ILogger object


public class FlightsController : Controller
{
private readonly ILogger _logger;

public FlightsController(ILogger<FlightsController> logger)


{
_logger = logger;
}

public IActionResult GetById(string flightId)


{
_logger.LogInformation("Retrieving flight {id}", flightId);
Flight flight = …; // Retrieve the flight by id
if (flight == null)
{
_logger.LogWarning("Unable to find flight {id}", flightId);
return NotFound();
}
return new ObjectResult(flight);
}
}

In the preceding example, an ILogger object is injected to the FlightsController constructor by the
ASP.NET Core dependency injection infrastructure. Then, the LogInformation and LogWarning methods
are used to write log messages. Note that the format string passed to these methods is not a standard
String.Format format string (for example, "id = {0}"), and it’s not a C# interpolated string either (for
example, $"id = {id}"). It is a custom format used by the ASP.NET Core logging API.
The generic type parameter of the ILogger object injected to the FlightsController constructor specifies
the logger’s category, which you can use to easily parse all the log messages from a specific area in the
application’s code. You could also inject the non-generic ILogger interface, which would be associated
with a default category. Categories are discussed in Topic 2, “Advanced Logging Configuration.” Note that
you can assign ILogger<T> to ILogger, and use the ILogger object (as in the above example). The
generic type parameter is only used to determine the logger’s category.

Note: ASP.NET Core will write its internal logs to the logger factory that it creates internally,
even if you create additional logger factories. If you want to create your own logger factory and
have ASP.NET Core write its logs to it, you’ll need to call ApplicationLogging.ConfigureLogger
with your logger factory object, and then set the ApplicationLogging.LoggerFactory property
to the same object.

The following code shows an ASP.NET Core controller that creates its own ILogger object from an
injected ILoggerFactory object:

Creating ILogger from a dependency injected ILoggerFactory


public class FlightsController : Controller
{
private readonly ILogger _logger;

public FlightsController(ILoggerFactory factory)


{
_logger = factory.CreateLogger(
"BlueYonder.Flights.FlightsController");
}

// … The rest of the code is unchanged


}

Advanced logging configuration


Each log message you write is associated with a
log level. The code examples in the previous topic
show the extension methods for writing logs at
various levels such as LogInformation and
LogWarning. Here is a complete list of log levels
in the order of severity: Critical, Error, Warning,
Information, Debug, and Trace. In many systems, log levels below Information will not be enabled by default because of the high log volume.

For more detailed information on log


levels, and recommendations for when to
use which, refer to the LogLevel enumeration documentation at:

https://aka.ms/moc-20487D-m8-pg1

Each log message that you write is associated with a log category. The category comes from your ILogger
object, and you can specify it when you create the logger with ILoggerFactory.CreateLogger. By
convention, the category is the fully-qualified name of the class writing the logs. As explained in the
previous topic, you can have ASP.NET Core inject an appropriately configured logger object by accepting
a constructor parameter of the generic type ILogger<T>. For example, if your constructor accepts a
parameter of type ILogger<FlightsController>, ASP.NET Core will create and inject a logger
configured with a category equal to the fully qualified name of the FlightsController class.
Each log message you write is associated with an event ID. The various Log… methods (such as
LogInformation) have overloads that accept an event ID as the first argument. An event ID is an integer
value that you can freely assign, and it serves the purpose of associating related log events together. For
example, a log message for completing a new flight reservation can have event ID 4000, and a log message
for canceling a flight reservation can have event ID 4001. Using event IDs makes automatic event parsing
easier for log processors and business intelligence tools.

Finally, each log message that you write can use a message template. You can also use plain strings that
you format yourself, but it is recommended to use a template that contains placeholders for useful but
variable data, such as flight numbers, reservation identifiers, and hotel addresses. The key benefit of using
a template is that you can store the variable data separately from the string message, and analyze the
data without parsing the complete string. This makes filtering, sorting, and various aggregations much
easier for log processing and analysis tools. In Lesson 2, “Diagnostic Tools,” Topic 3, “Overview of Event
Tracing for Windows (ETW),” we will discuss the value of semantic logging, where a log entry is not just a
plain string, but a structured payload. By specifying event IDs and message templates, you can use
structured (semantic) logging with any log provider, and not just ETW.

The following code shows how to use event IDs and message templates to emit user-friendly but also
machine-parsable structured logs:

Adding structured logs to a service


private const int FlightReservationCancelled = 4001;
private const int FlightReservationCancellationRequested = 4002;
private const int FlightReservationNotFound = 4003;

public IActionResult CancelReservation(string reservationId)


{
_logger.LogInformation(FlightReservationCancellationRequested,
"Requesting cancellation of flight {res}", reservationId);

Flight flight = …;
if (flight == null)
{
_logger.LogWarning(FlightReservationNotFound,
"Reservation {res} could not be found", reservationId);
return NotFound();
}

_logger.LogInformation(FlightReservationCancelled,
"Successfully cancelled reservation {res}", reservationId);
return Ok();
}

The ASP.NET Core logging API also offers logging scopes. In many cases, you have a set of log messages
associated with a single logical activity in your application, such as booking a flight or canceling a hotel
reservation. A logging scope aggregates log messages together, and if you use an appropriate logging
provider and log viewer, makes it easier for you to see the logical structure and hierarchy of logs in the
same scope. You can also use a logging scope to attach the same set of contextual information, such as a
request ID or transaction ID, to all logs in the same scope.
To create a logging scope, use the ILogger.BeginScope method, which returns an IDisposable object.
Logs written inside the scope are associated with the scope, until you dispose it. Logging scopes can be
nested, and in fact ASP.NET Core creates a logging scope for each controller method call, which includes
the request identifier, request path, and the name of your controller’s method.
The following code shows how to use a logging scope to aggregate multiple log messages under a single
scope:

Aggregating multiple log messages under a single scope


using (_logger.BeginScope($"flight {id}"))
{
// Log messages inside the scope are associated with the context
// string, such as "flight BY97".

int seatsAvailable = …;
_logger.LogInformation(
"Seats available at requested fare: {seats}", seatsAvailable);

if (seatsAvailable <= 0)
{
_logger.LogWarning("No seats available");
}
// … More code
}
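The interpolated string in the preceding example produces a plain-text scope name. If you prefer the scope
to carry structured values, you can pass a dictionary of key-value pairs to BeginScope. The following is a
minimal sketch of this common pattern (it reuses the injected _logger and assumes a flightId variable, and
Dictionary<string, object> comes from System.Collections.Generic); providers that support scopes can
then record these values as separate fields:

Attaching structured values to a logging scope (sketch)

using (_logger.BeginScope(new Dictionary<string, object>
{
    ["FlightId"] = flightId,
    ["TransactionId"] = Guid.NewGuid()
}))
{
    // Every message written inside this scope carries FlightId and TransactionId
    _logger.LogInformation("Checking seat availability");
    // … More code
}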

Built-in logging providers


Logging providers are responsible for processing
your log messages and sending them to their final
destination, which can be a display, a storage
system, or a distributed log processing facility.
ASP.NET Core ships with several built-in logging
providers, and you can plug in external providers
easily. External providers are discussed in Topic 4,
“Using External Logging Providers.”
You add logging providers to your ASP.NET Core
application by using the ILoggingBuilder object
(which configures the ILoggerFactory object;
you can also access the logging factory directly). It
has extension methods for the built-in logging providers, and when you install external providers, they
also offer extension methods for registration. For example, the AddConsole extension method adds
logging to the console, with a specified log level. Likewise, AddEventLog adds logging to the Windows
Event Log, with a specified log level. The built-in providers are:
• Console. This provider logs messages to the console.
• Debug. This provider logs messages to Debug.WriteLine.

• EventSource. This provider logs messages to ETW.

• EventLog: This provider logs messages to the Windows Event Log.


• TraceSource: This provider logs messages to System.Diagnostics trace listeners.

• Azure App Service. This provider logs messages to files in the web app's file system in Azure App
Service. It can also log messages to Azure Storage blobs in a configured Azure Storage account.

The following code shows how to add built-in logging providers to your ASP.NET Core application by
configuring the web host builder in the application’s startup code:

ASP.NET Core logging configuration


var host = new WebHostBuilder()
.ConfigureLogging(loggingBuilder =>
{
loggingBuilder.AddConsole(LogLevel.Debug);
loggingBuilder.AddEventLog(LogLevel.Warning);
loggingBuilder.AddEventSource();
})
// … Other host configuration steps
.Build();

host.Run();

The following is an example of the output from a console provider when it runs one of the code examples
from the previous topic:

Logging output example


info: BlueYonder.Flights.FlightsController[4001]
Requesting cancellation of flight AB456FG
warn: BlueYonder.Flights.FlightsController[4003]
Reservation AB456FG could not be found

Note: You can also use the ILoggingBuilder.AddConfiguration method to read log
configuration settings from a configuration file, instead of specifying the logging providers and
levels in code.
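For example, the following sketch reads the logging configuration from a configuration file instead of
hard-coding it. It assumes that appsettings.json contains a standard "Logging" section and that the
application configuration has been loaded (for example, by ConfigureAppConfiguration or
WebHost.CreateDefaultBuilder):

Reading logging configuration from appsettings.json (sketch)

// Assumes appsettings.json contains a section such as:
// "Logging": { "LogLevel": { "Default": "Information" } }
var host = new WebHostBuilder()
    .ConfigureLogging((context, loggingBuilder) =>
    {
        loggingBuilder.AddConfiguration(context.Configuration.GetSection("Logging"));
        loggingBuilder.AddConsole();
    })
    // … Other host configuration steps
    .Build();

host.Run();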

Demonstration: Recording logs to the Console and EventSource providers


In this demonstration, you will create a new ASP.NET Core web API project and show how to get ILogger
messages to show up in the console and the EventSource listeners.

Demonstration Steps
You will find the steps in the “Demonstration: Recording logs to the Console and EventSource providers“
section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-
Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_DEMO.md.

Using external logging providers


One of the key tenets of the ASP.NET Core
logging API is that it can be extended by third-
party providers without requiring changes to the
infrastructure code. The providers for third-party
logging frameworks that integrate with ASP.NET
Core include:
• Serilog. A logging provider for the Serilog
library, which implements semantic logging.
• NLog. A logging provider for the popular
NLog logging library.

• elmah.io. A logging provider for the Elmah


web application diagnostic service.
Serilog is a powerful logging library that can write log messages to a variety of sinks, including files, the
console, and various cloud services. It was designed for structured (semantic) logging so that each
message is formatted by using a template with variable parameters, similar to ASP.NET Core logging
message templates. The structured data can be easily recorded as JSON objects to numerous databases,
log processors, streaming services, analyzers, and storage systems.

To learn more about the Serilog library, go to:


https://aka.ms/moc-20487D-m8-pg2

To learn more about Serilog’s log sinks, go to:


https://aka.ms/moc-20487D-m8-pg3

To configure Serilog for your ASP.NET Core application, install the Serilog.AspNetCore NuGet package
and some additional packages based on the sinks that you want to use. For example, the console sink is in
the Serilog.Sinks.Console NuGet package. Then, you call the UseSerilog method in your web host
builder’s configuration and configure Serilog’s sinks, in turn.

The following code shows how to configure Serilog with the ASP.NET Core web host builder:

Configuring Serilog as logger of ASP.NET Core


var host = new WebHostBuilder()
.UseSerilog((context, config) =>
{
config.ReadFrom.Configuration(context.Configuration)
.Enrich.FromLogContext()
.WriteTo.Console();
})
// … Other host configuration steps
.Build();

host.Run();

To learn more about Serilog’s integration with ASP.NET Core, go to:


https://aka.ms/moc-20487D-m8-pg4

Demonstration: Using Serilog


In this demonstration, you will use the Serilog logging middleware to plug events into Serilog.

Demonstration Steps
You will find the steps in the “Demonstration: Using Serilog“ section on the following page:
https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_DEMO.md.

Streaming diagnostic logs in Azure Web Apps


When you host your ASP.NET Core web
application or web service in Azure App Service, it
can use the ASP.NET Core Azure App Service
logging provider. This provider is in the
Microsoft.Extensions.Logging.AzureAppServic
es NuGet package, and it writes logs to plain text
files in the web app’s file system in Azure App
Service, as well as to an Azure Storage blob in an
Azure Storage account. (For more information
about Azure Storage, refer to Module 7,
“Implementing Data Storage in Azure,” Lesson 2,
“Accessing Data in Azure Storage.”) If you’re using
the full .NET Framework, you need to install the provider's NuGet package and call the
AddAzureWebAppDiagnostics method on your logging configuration. If you're using .NET Core, this is
done automatically on your behalf when the application is hosted in Azure App Service.
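The following is a minimal sketch of that registration when targeting the full .NET Framework, assuming the
Microsoft.Extensions.Logging.AzureAppServices NuGet package is installed:

Registering the Azure App Service logging provider (sketch)

var host = new WebHostBuilder()
    .ConfigureLogging(loggingBuilder =>
    {
        // Writes logs to the App Service file system and/or blob storage,
        // according to the Diagnostic logs settings in the Azure portal
        loggingBuilder.AddAzureWebAppDiagnostics();
    })
    // … Other host configuration steps
    .Build();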

For more information on the Azure App Service logging provider, go to:
https://aka.ms/moc-20487D-m8-pg5

To configure logging to the file system and/or to an Azure Storage blob, you use the Diagnostic logs
pane under the MONITORING section of your Azure App Service’s settings on the Azure portal. Your
changes are applied immediately, and you don’t have to restart the application to get access to the logs.

The following screenshot shows the Diagnostics logs pane on the Azure portal. You can use the
Diagnostic logs pane to enable application logging to the file system or Azure Storage blobs:

FIGURE 8.1: THE DIAGNOSTICS LOGS PANE ON THE AZURE PORTAL
After you enable diagnostic logs, you can retrieve them from the application’s file system (for example,
through FTP) or from the Azure Storage blob container. Alternatively, you can view the logs in real-time
by using the Azure portal’s Log stream pane under the MONITORING section of your web application in
Azure App Service. You can also view the streaming logs by using Windows PowerShell or Azure
Command-Line Interface (CLI).
The following screenshot shows the log files from the application’s file system, as seen in the Console tool
in the Azure portal:

FIGURE 8.2: THE LOG FILES FROM THE APPLICATION’S FILE SYSTEM

The following screenshot shows the Log stream pane in the Azure portal:

FIGURE 8.3: THE LOG STREAM PANE IN THE AZURE PORTAL

Lesson 2
Diagnostic Tools
Understanding the performance profile and behavior of your web services is critical for successful testing
and production deployments. Backend services that do not perform well cause upstream problems for
other services that depend on them, and user-facing services that don’t perform well cause immediate
customer frustration with your system. The .NET runtime provides a variety of performance and diagnostic
information that you can use in development and testing to improve the performance of your service. You
can also use .NET runtime diagnostic information in production environments to monitor the health of
your service and respond accordingly.

In this lesson, we will explore the performance diagnostic facilities built into ASP.NET, IIS, and .NET Core
across both major operating system platforms: Windows and Linux. We will see how to monitor
application performance in production and record performance traces that can be analyzed later.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain the key benefits and use cases for Windows performance counters.

• Explain how to collect and monitor IIS and ASP.NET performance counters.
• Explain the architecture of ETW.

• Record and analyze .NET-related ETW events.

• Record and analyze .NET Core LTTng events on Linux.

Overview of performance counters


Performance counters are the fundamental
Windows mechanism for high-level performance
monitoring and diagnostics. The infrastructure
that supports performance counters is part of the
Windows operating system. Numerous Windows
components use performance counters to report
performance information, such as processor
utilization, disk reads and writes, network
connections established, and available memory.
Applications and applications frameworks on
Windows also use performance counters to report
performance information. For example, IIS
provides numerous performance counters for monitoring the number of active connections and IIS cache
behavior. Likewise, the .NET Framework (but not .NET Core) provides performance counters for viewing
.NET memory usage, lock contention and thread utilization, and exception rates.

Performance counters are organized in a simple hierarchical structure. Performance objects or


performance categories can have multiple performance counters embedded in them. Each performance
counter is a single numeric value, which can be an absolute quantity (such as the number of exceptions
raised by an application since it was started), a rate (such as the number of bytes transferred per second,
on average), or a percentage (such as the proportion of available memory on the system). Additionally,
most performance categories have multiple instances, where each instance contains copies of the counters
belonging to that category. To make this more intuitive, think of performance categories as C# classes,
performance counters as individual properties, and instances as class instances (objects).

Here are some examples of useful Windows performance counters (more performance counters will be
discussed in Topic 2, “ASP.NET and IIS performance counters”):

• Process\% Processor Time

• Physical Disk\Current Disk Queue Length

• Memory\Available MBytes

• Network Interface\Bytes Received Per Second

To view and record performance counters, you can use the built-in Windows Performance Monitor
(perfmon). It can show the current values of the performance counters you specify, record them to a file
for later viewing, and open existing recordings. When recording performance counters, you can use
various file formats such as simple Comma-Separated Values (CSV), which you can easily import to
Microsoft Excel and similar software.

The following is a screenshot of the main Performance Monitor window, which is monitoring a few
performance counters:

FIGURE 8.4: THE MAIN PERFORMANCE MONITOR WINDOW

To learn more about recording performance counters in Performance Monitor, go to:


https://aka.ms/moc-20487D-m8-pg6

ASP.NET and IIS performance counters


The .NET Framework, ASP.NET, and IIS are good
examples of higher-level components that are not
part of the Windows kernel, but can still be
monitored by using performance counters.
Numerous monitoring solutions for web
applications and services that run on Windows
collect information from these performance
counters., You can view and record these
performance counters by using Performance
Monitor. Just like lower-level performance
counters, collecting this information can help
begin an investigation and point you in the
direction of the component or subsystem exhibiting the problem. This can help in capacity planning and
load testing to evaluate your system’s performance and scalability. Combining low-level performance
counters, such as CPU utilization and network bandwidth, with high-level performance counters, such as
.NET exceptions or ASP.NET requests processed, helps trace performance issues through all the layers of
your web application.
Some of the useful ASP.NET performance counters are:

• ASP.NET\Requests Current
• ASP.NET\Application Restarts

• ASP.NET Apps\Errors Total

• ASP.NET Apps\Request Bytes In Total, Request Bytes Out Total


• ASP.NET Apps\Requests/Sec

For a complete list of ASP.NET performance counters, go to:


https://aka.ms/moc-20487D-m8-pg7

Some of the useful .NET performance counters are:


• .NET CLR Memory\# Bytes in all Heaps

• .NET CLR Memory\% Time in GC

• .NET CLR Exceptions\# of Exceps Thrown

• .NET CLR LocksAndThreads\Contention Rate / sec

• .NET CLR LocksAndThreads\# of current physical Threads

For a complete list of .NET performance counters, go to:


https://aka.ms/moc-20487D-m8-pg8

Some of the useful IIS performance counters are:


• WWW Service\Current Connections

• WWW Service\Bytes Sent/sec


• WWW Service\Bytes Received/sec

For a list of additional IIS performance counters, go to: https://aka.ms/moc-20487D-m8-pg9

Because they are Windows-specific, performance counters are not supported by .NET Core. This means that
if you use the cross-platform .NET Core runtime, you will not see .NET-specific performance counters
(such as GC behavior) exposed from your application process. However, if you’re running on Windows,
you can switch your ASP.NET Core or .NET Core application to use the full .NET Framework runtime, which
will expose the traditional .NET performance counters. To do so, you only need to change the target
framework for your main project.

You can create custom performance counters in your own application code by using the
PerformanceCounterCategory and PerformanceCounter classes from the System.Diagnostics
namespace. You can use these classes to augment the set of performance monitoring data collected from
the system with application-specific insights that might further help pinpoint the problem. You can also
use the PerformanceCounter class to programmatically read performance counter values from system
counters, which can be used for self-diagnostics and reporting.

Note: In late 2017, performance counter support (including the relevant classes in
System.Diagnostics) was merged into .NET Core. As a result, you can now use the Windows
Compatibility Pack to add performance counter support to .NET Core applications. However, this
does not provide support for performance counters on non-Windows platforms.

For more information about the .NET Core Windows Compatibility Pack, and how it can be
used to help porting Windows applications to .NET Core, go to:
https://aka.ms/moc-20487D-m8-pg10

The following code example shows how to create a new performance category with two performance
counters, and then update them from the application code:

Creating performance counters


var reservations = new CounterCreationData(
"# Reservations Booked",
"The number of reservations booked by the flights system",
PerformanceCounterType.NumberOfItems32);
var overbookings = new CounterCreationData(
"# Flights Overbooked",
"The number of flights overbooked today",
PerformanceCounterType.NumberOfItems32);

var counters = new CounterCreationDataCollection();


counters.Add(reservations);
counters.Add(overbookings);

PerformanceCounterCategory.Create(
"Blue Yonder Flights",
"Counters for the Blue Yonder flights reservation service",
counters);

var reservationsCounter = new PerformanceCounter(


"Blue Yonder Flights", "# Reservations Booked", readOnly: false);
reservationsCounter.RawValue = 42;

The preceding code example adds two performance counters with help descriptions to a single category
called “Blue Yonder Flights,” and then creates that category. After successfully creating the category, the
code updates the first counter to a specific value.
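The PerformanceCounter class can also read values from existing system counters, as mentioned earlier. The
following is a minimal sketch of that scenario, assuming it runs on Windows where the built-in Processor
category is available (the classes are in the System.Diagnostics namespace):

Reading a system performance counter (sketch)

var cpu = new PerformanceCounter(
    "Processor", "% Processor Time", "_Total", readOnly: true);

cpu.NextValue();                        // The first sample of a rate counter is always 0
Thread.Sleep(1000);                     // Wait so the next sample covers a real interval
float processorTime = cpu.NextValue();  // Average CPU utilization over the last second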

For more information on the PerformanceCounter class, go to the MSDN documentation:


https://aka.ms/moc-20487D-m8-pg11

The .NET Core runtime does not support performance counters because they are a Windows-only
mechanism. If you need a similar mechanism that would work across all the platforms supported by .NET
Core, you should use event counters. Event counters are similar to event sources, but they provide just a
single counter value. Later in this lesson, you will learn about ETW and LTTng, which are the
implementation libraries behind event counters and event sources.
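The following is a minimal sketch of what an event counter might look like. The type and member names are
illustrative; the EventCounter class and its WriteMetric method are part of the System.Diagnostics.Tracing
namespace:

Defining and updating an event counter (sketch)

sealed class FlightsEventSource : EventSource
{
    public static readonly FlightsEventSource Log = new FlightsEventSource();

    private readonly EventCounter _requestTime;

    private FlightsEventSource() : base("BlueYonder-Flights")
    {
        // The counter is published under this event source's name
        _requestTime = new EventCounter("request-time-ms", this);
    }

    // Call this from the application code, for example once per handled request
    public void ReportRequestTime(float milliseconds) =>
        _requestTime.WriteMetric(milliseconds);
}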

To learn more about event counters, refer to the event counter tutorial at:
https://aka.ms/moc-20487D-m8-pg12

Overview of ETW
As we have seen earlier in this lesson, obtaining
diagnostic data about a running system or
application is critical for its proper development,
testing, and operation. Performance counters are
a valuable tool in getting diagnostic data about
your system, but they can’t cover certain
scenarios. Specifically, performance counters are
not a good fit if:
• You need high-resolution data about events
that happen at a high frequency, such as
individual HTTP requests or individual
exceptions thrown. Performance counters can
only provide an aggregate.
• You need additional information about events that are more than just a single numeric value. For
example, you need the URLs of individual HTTP requests or the names of frequently accessed disk
files. Performance counters can only provide numeric information.
ETW is a Windows operating system component that is implemented in the Windows kernel. It is a high-
performance event tracing framework designed for rates of tens of thousands of events per second, with a
reasonable sustained CPU overhead. Numerous Windows components and higher-level application
frameworks (including .NET Core, Task Parallel Library, and IIS) are instrumented with ETW support, and
can provide diagnostic data about their internal operations by using ETW events.

ETW events have a well-defined structured payload, which is one of the key differences between them
and plain log messages. For example, instead of emitting a log message such as “Received new flight
reservation EWR-YYZ on flight BY 005, fare class W, passenger name Mr. David Smith” as a plain string,
you would define an event payload called NewFlightReservationEvent with the following fields, and
emit it through the ETW infrastructure:

• Airline (string) = “BY”

• Number (int) = 5
• Origin (string) = “EWR”

• Destination (string) = “YYZ”

• Fare class (enum FareClass) = FareClass.W


• Passenger name (string) = “Mr. David Smith”

As a result, it is very easy for tools to parse ETW events and understand their contents, which helps with
filtering, sorting, aggregation, and other tasks that are difficult to perform on unstructured log data
without having to parse and interpret it first. This paradigm is called structured logging, or semantic
logging, and it is becoming more and more common in modern tracing frameworks that are designed for
producing and retaining large amounts of trace data for subsequent analysis.

Note: You can design and provide application-level ETW events by using the EventSource
class, which is discussed in Topic 4, “.NET-related ETW events.” Recording application-level
events alongside system events can help diagnose complex problems by tracing data flow
and events throughout your application stack.

The key components in the ETW architecture are the following:

• Providers. ETW providers emit events with well-defined, structured, and discoverable payloads. The
events are not stored or copied anywhere by default; a provider has to be enabled for tracing to
occur.
• Sessions. ETW sessions store events written by providers in a set of buffers, which can be directed to a
file on disk or discarded when the buffer becomes full.

• Controllers. ETW controllers create a session, and then enable specific providers to write events into
the session. A provider may write events into more than one session.

• Consumers. ETW consumers process ETW events. Events can be processed from an on-disk file (.etl,
Event Trace Log) or a real-time memory buffer to which they are written by one or more providers.

In many cases, you will use ETW to record a set of events to a file, and then open that file with dedicated
analysis tools. However, it can also be very useful to process ETW events in real-time, without having to
record them to a file. This enables continuous monitoring and aggregation without the additional
overhead of writing high-frequency events to disk. Numerous monitoring frameworks (including
Application Insights, discussed in Lesson 3, “Application Insights”) use ETW behind the covers to
implement accurate low-overhead instrumentation and diagnostics.
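As an illustration only, the following sketch consumes ETW events in real time by using the
Microsoft.Diagnostics.Tracing.TraceEvent NuGet package (an assumption; this library is not part of the
course code). Creating a real-time ETW session requires administrative privileges:

Processing ETW events in real time (sketch)

using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Session;

using (var session = new TraceEventSession("BlueYonderRealTimeSession"))
{
    // Enable a provider by name; "FlightQueriesEventSource" is an illustrative
    // EventSource-based provider name
    session.EnableProvider("FlightQueriesEventSource");

    // Handle each event as it arrives, without writing anything to disk
    session.Source.Dynamic.All += traceEvent =>
        Console.WriteLine($"{traceEvent.TimeStamp:T} {traceEvent.EventName}");

    session.Source.Process();  // Blocks and pumps events until the session is disposed
}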

Some common tools that you will use when working with ETW are:

• PerfView. An open source multi-tool that can be used to record and analyze ETW events, supports
table and flame graph visualizations, and understands a variety of event formats. PerfView is
discussed in Topic 4, “.NET-related ETW events.”
• Windows Performance Analyzer. A graphical tool that reads and analyzes .etl files, and supports
multiple types of advanced visualizations.

• Windows Performance Recorder. A combination GUI/console tool that records ETW events to a file
based on a configuration that you specify.

Note: When recording an ETW event, you can also capture the application call stack that
led to the generation of this event. For many types of events, the call stack is an extremely
valuable piece of information. For example, consider an event generated when a file is written to
disk: having just the event data would be useful, but knowing where in the application code the
file was being written can be even more useful.

.NET-related ETW events


The .NET runtime is instrumented with numerous
ETW events that can be used for carrying out
various kinds of analyses. These events are richer
than the .NET performance counters discussed in
Topic 2, “ASP.NET and IIS performance counters.”
Recording and aggregating these events is behind
the implementation of numerous successful
profiling tools, including the Visual Studio profiler,
Application Insights, and Visual Studio
Concurrency Visualizer.

Some useful .NET Framework common language


runtime (CLR) events include the following:
• ExceptionThrown. Emitted when an exception is thrown, and includes the exception type and
exception message.

• AssemblyLoad. Emitted when a .NET assembly is loaded, and includes the assembly name, version,
and load path.

• GCStart, GCEnd. Emitted when a garbage collection starts and ends, and includes the generation
being collected and the GC reason.
• ContentionStart, ContentionStop. Emitted when a managed thread starts to wait for a lock and
when the thread acquires the lock. This event includes the lock being waited for and the waiting
thread.

• GCAllocationTick. Emitted for every 100KB (approximately) of allocated memory, and includes the
type of the last allocated object and the amount of allocated memory.

For a complete list of CLR ETW events, go to: https://aka.ms/moc-20487D-m8-pg13

You can include ETW events in your application by using the EventSource class in the
System.Diagnostics.Tracing namespace, which is available in .NET Core and in the full .NET Framework
(as of .NET 4.5). This class handles the low-level details of interacting with the operating system, and
provides a clean API for defining the event payload and writing events with a minimal effort. What’s more,
if you use the EventSource class in a .NET Core application, it will automatically use ETW when running
on Windows, and LTTng (discussed in Topic 5, “LTTng events in .NET Core on Linux”) when running on
Linux.
The following code example defines a set of ETW events by using the EventSource class, and then emits
them from the application code:

C# definition of ETW events


class FlightQueriesEventSource : EventSource
{
public void QueryStarted(
Guid id, DateTime when, string origin, string destination)
{
WriteEvent(1, id, when, origin, destination);
}

public void QueryCompleted(Guid id, int resultCount)


{
WriteEvent(2, id, resultCount);
}

public static readonly FlightQueriesEventSource Log =


new FlightQueriesEventSource();
}

// In the application controller:


FlightQuery query = …;
Guid queryId = Guid.NewGuid();
FlightQueriesEventSource.Log.QueryStarted(queryId,
query.DepartureDate, query.Origin, query.Destination);
// Query the database etc.
List<FlightInfo> results = …;
FlightQueriesEventSource.Log.QueryCompleted(queryId, results.Count);

In the preceding example, the FlightQueriesEventSource class derives from EventSource, and defines
two public methods called QueryStarted and QueryCompleted. These methods and their parameters
automatically form the structured event payload for two events. Finally, the application code needs to call
only these methods because the underlying ETW infrastructure is handled by the
EventSource.WriteEvent method.
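In the preceding example, the event IDs are implicit (they follow the order in which the methods are
declared). As a sketch of a common refinement, you can make the provider name and event IDs explicit by
applying the EventSource and Event attributes from System.Diagnostics.Tracing; the name shown here is
illustrative:

Declaring explicit event IDs with attributes (sketch)

[EventSource(Name = "BlueYonder-Flights-Queries")]
sealed class FlightQueriesEventSource : EventSource
{
    public static readonly FlightQueriesEventSource Log =
        new FlightQueriesEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void QueryStarted(Guid id, DateTime when, string origin, string destination)
    {
        WriteEvent(1, id, when, origin, destination);
    }

    [Event(2, Level = EventLevel.Informational)]
    public void QueryCompleted(Guid id, int resultCount)
    {
        WriteEvent(2, id, resultCount);
    }
}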

To learn more about the EventSource class, go to: https://aka.ms/moc-20487D-m8-pg14

To record .NET ETW events in PerfView, you can use the Collect > Run or Collect > Collect menu items.
To record custom providers from your application, you need to specify their names in the Additional
Providers box. After collecting events, you can view them using PerfView’s rich reporting facilities, which
include general statistics (such as garbage collection events) and individual event data.

The following screenshot depicts PerfView’s main collection dialog box, where you specify which events
you’d like PerfView to record:

FIGURE 8.5: PERFVIEW’S MAIN COLLECTION DIALOG BOX

The following screenshot depicts PerfView’s main window after expanding the recording performed in the
previous step:

FIGURE 8.6: PERFVIEW’S MAIN WINDOW AFTER EXPANDING THE RECORDING PERFORMED
The following screenshot depicts PerfView’s Events window, which displays all the different events
recorded by the tool, including custom application events that you specified:

FIGURE 8.7: PERFVIEW’S EVENTS WINDOW



The following screenshot depicts PerfView’s GCStats report, which can be used for diagnosing high
garbage collection rates and pause times:

FIGURE 8.8: PERFVIEW’S GCSTATS REPORT

You can download PerfView from the GitHub repository Releases page, where the project is
developed and maintained:
https://aka.ms/moc-20487D-m8-pg15

For more information on using PerfView to record and analyze ETW events, refer to this
series of video tutorials on Microsoft Channel 9 from PerfView’s author, Vance Morrison:
https://aka.ms/moc-20487D-m8-pg16

LTTng events in .NET Core on Linux


ETW, described in the previous topic, has more
than 15 years of successful use in the field. It has
become the foundation for many profiling and
diagnostic tools, and many .NET applications and
services use it for providing their own custom
sources of diagnostic data. Unfortunately, ETW
works only on Windows, while .NET Core is an
open source, cross-platform runtime. Accordingly,
on Linux systems, .NET Core and the EventSource
class use a different tracing implementation,
called LTTng.

LTTng (Linux Trace Toolkit, next generation) is an open source project that was first released in 2005.
LTTng provides correlated application and system tracing support. LTTng works on a variety of Linux
distributions, including the distributions supported by .NET Core (such as Ubuntu, Red Hat Enterprise
Linux, and others). The LTTng architecture is fairly similar to ETW, although instead of relying on a kernel
component for collecting application events, it employs a user-space component. LTTng also has some
interesting features, which are not supported by ETW. An example of such a feature is relaying trace data
to a different machine. On the other hand, one ETW feature that is missing from LTTng is the ability to
record application call stacks with events.

To learn more about LTTng and read its documentation, see:


https://aka.ms/moc-20487D-m8-pg17

You can install LTTng from package repositories for various distributions. LTTng installs a daemon
(background service), which collects data from running sessions and pushes it to files. It also installs the
lttng command-line tool, which you can use to create a session, add events to the session, start recording
the session, and stop the session when you’re done.
The following code example shows how to install LTTng on Ubuntu and Red Hat Enterprise Linux, the two
common Linux distributions supported by .NET Core:

LTTng installation commands


# Install on Ubuntu:
apt-add-repository ppa:lttng/ppa
apt-get update
apt-get install lttng-tools lttng-modules-dkms liblttng-ust0

# Install on Red Hat Enterprise Linux:


wget -P /etc/yum.repos.d/ \
https://packages.efficios.com/repo.files/EfficiOS-RHEL7-x86-64.repo
rpmkeys --import https://packages.efficios.com/rhel/repo.key
yum updateinfo
yum install lttng-tools lttng-ust kmod-lttng-modules babeltrace

By default, .NET Core on Linux does not emit runtime and application events to LTTng. You can control
this behavior by setting the COMPlus_EnableEventLog environment variable to 1 prior to launching your
application. You can’t change this setting if you have already started the application; you will need to
restart the application for the change to take effect.

The following code example shows how to launch an application with the COMPlus_EnableEventLog
environment variable set appropriately, and then use the lttng tool to record the ExceptionThrown CLR
event:

Recording LTTng trace commands


# Create a new LTTng session
lttng create exceptions-trace

# Add context data (process id, thread id, process name) to each event
lttng add-context --userspace --type vpid
lttng add-context --userspace --type vtid
lttng add-context --userspace --type procname

# Enable any event starting with Exception


lttng enable-event --userspace --tracepoint DotNETRuntime:Exception*

# Start recording events


lttng start

# Run your application



COMPlus_EnableEventLog=1 ./myapp

# Stop and destroy the session


lttng stop
lttng destroy

By default, LTTng records events to a series of files that are placed in a directory that you specify. To view
the collected data, you can use several viewer tools. A very simple command-line tool for viewing LTTng
traces is babeltrace, which can read the LTTng output (in Common Trace Format, CTF). Another option is
the Trace Compass tool, which can visualize trace data and events. Lastly, if you create a .zip archive of
LTTng’s recording directory, you can copy it to a Windows machine and open it by using PerfView.

The following code example shows how to use the babeltrace tool to display events and the output it
produces on a sample trace:

Using the babeltrace command


babeltrace ~/exceptions-trace
[07:31:11.751548909] (+?.?????????) ubuntu-16 DotNETRuntime:ExceptionThrown_V1: { cpu_id
= 0 }, { ExceptionType = "System.NotSupportedException", ExceptionMessage = "Sample
exception.", ExceptionEIP = 139767278604807, ExceptionHRESULT = 2148734229,
ExceptionFlags = 16, ClrInstanceID = 0 }
[07:31:11.751603953] (+0.000055044) ubuntu-16 DotNETRuntime:ExceptionCatchStart: { cpu_id
= 0 }, { EntryEIP = 139765244804131, MethodID = 139765233785640, MethodName = "void
[Runny] Runny.Program::Main(string[])", ClrInstanceID = 0 }

Microsoft provides the perfcollect script, which can be used to record LTTng events and put
them in an archive that you can access by using PerfView. The perfcollect script is available
on GitHub: https://aka.ms/moc-20487D-m8-pg18

Demonstration: Collecting ASP.NET Core LTTng events on Linux


In this demonstration, you will create a new ASP.NET Core app and run it in a Linux container, use
docker exec to get a shell in the container, and install LTTng. You will then record LTTng events while
hitting the app’s endpoints a few times from a browser, and show the recorded events with babeltrace.

Demonstration Steps
You will find the steps in the “Demonstration: Collecting ASP.NET Core LTTng events on Linux“ section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD08_DEMO.md.

Lab A: Monitoring ASP.NET Core with ETW and LTTng


Scenario
In this lab, you will use ETW on Windows and LTTng on Linux to monitor exception and GC events in an
ASP.NET Core application.

Objectives
After you complete this lab, you will be able to:

• Collect and analyze ETW events with PerfView for an ASP.NET Core application on Windows.
• Collect and analyze LTTng events for an ASP.NET Core application in a Linux Docker container.

Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_LAK.md.

Exercise 1: Collect and View ETW Events


Scenario
Collect and analyze ETW events of an ASP.NET Core application on Windows.

Exercise 2: Collect and View LTTng Events


Scenario
Collect and analyze LTTng events of an ASP.NET Core application on Linux by using a Docker container.

Lesson 3
Application Insights
Traditionally, monitoring and performance tools have focused on hardware resource consumption, such
as CPU utilization and memory usage, and trivial black-box performance metrics, such as the average
response time for a request to a specific server. With the advent of complex, distributed systems that
consist of tens or hundreds of inter-dependent services, it has become increasingly difficult to understand
the causes of increased resource consumption or degraded response times.
In this lesson, we will discuss Application Insights, an application performance monitoring tool provided
by Microsoft and hosted at scale on Azure. By using Application Insights, you can go beyond monitoring
hardware resources or single-system utilization, and focus on the holistic behavior of the entire system,
trace a request as it crosses multiple services and databases, and truly understand outliers and
problematic events.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain the types of application telemetry provided by typical web services.

• Explain how to add Application Insights monitoring to a web application or service.


• Describe extending the Application Insights data with custom application events.

• Describe dependency tracking with Application Insights.

• Explain how to perform service load testing with Application Insights.

Types of application telemetry


Application Performance Monitoring (APM) tools
focus on diagnosing complex performance
problems by gaining a complete understanding of
how a distributed system behaves. APM tools
monitor performance metrics, such as HTTP
response time, and hardware resource
consumption, such as CPU utilization. However,
APM tools also provide visibility into relationships
between system services and components such as
databases. For example, an APM tool can display
the database queries performed by a single call to
your web service, or the response latency of third-
party services that your service invokes.
APM tools focus on the following types of metrics and data collection:
• Real User Monitoring, End User Experience. These measure time and work from the moment a user
request was initiated (for example, page navigation in a web browser) to the moment the data was
fully received and processed on the user’s machine (for example, the page was completely loaded
and rendered by the browser). You can monitor the server-side work performed as part of a user
request by using various agents integrated to the web service or even network monitoring tools that
run on the same machine. However, monitoring time on the user’s end requires instrumentation, such
as a JavaScript framework, embedded on every page.

• Transaction Monitoring and Tracing. These map and measure all the services and database systems
involved in executing a single business transaction, such as making a flight reservation and upgrading
a hotel room booking. This helps understand performance problems in low-level components and
map them to issues experienced by real users.

• Analytics and Forecasting. These present high-level statistics on commonly executed paths in the
application, user interaction patterns (for example, navigation flows in a mobile app), error rates, and
other interesting metrics. Predictive analytics tries to use the collected data to forecast future
behavior, which is important for capacity planning and projecting business growth.

• Runtime-Specific Analysis. This uses special instrumentation agents to monitor the performance of
high-level runtimes, such as .NET, Java, and Node.js, and present interesting findings, such as
exceptions thrown, garbage collection performance, and threading behavior and efficiency.

The data collected by an APM tool usually originates at the following sources:
• Servers and infrastructure. A monitoring agent can be installed on the target machine and collect
performance metrics, or cloud diagnostic tools (such as those available in Microsoft Azure) can
provide information on hardware resource utilization.
• Web application or service. Middleware, embedded into the web application or the web server itself,
can report important information on HTTP response latency, the types of HTTP status codes returned,
and the commonly accessed URLs.

• Application code. Applications and services can use a special diagnostic API (provided as part of the
APM’s library) to emit custom events and metrics.

• Application and server logs. The APM tool can aggregate and collect the log messages reported by
the application or the web server.

• Browser events. A JavaScript instrumentation library can send data to the server about browser
performance events, such as page load and rendering times, and HTTP response times as seen by the
client.

Application Insights is a complete, robust, scalable APM solution for web services and applications that
use various languages and runtimes such as .NET, Java, and Node.js. Application Insights is hosted and
scaled automatically by Microsoft Azure, but you can use it to monitor the performance of on-premises or
cloud applications. Application Insights is not just a data collection module. It provides a powerful
analytics dashboard that automatically detects anomalies, can perform profiling and load testing, and can
help explain issues that users experience when using your web application or service.

Application Insights monitors and collects the following types of data:

• Client HTTP requests and responses, including status codes, latency, failure rates, client location
information, and headers.

• HTTP requests and responses to any services on which your application depends.

• Exceptions and errors in both server processes (such as .NET) and browser applications.
• Webpage performance as reported by web browsers (for example, page rendering times).

• Raw system performance data, such as Windows performance counters or Linux metrics.

• Azure diagnostic data for Azure Virtual Machines, Azure App Services, and other sources.

• Application and server logs (traces).

• Custom events generated by the application code.



In addition to standard APM features such as dashboards, statistics, and analytics, Application Insights
offers several distinguishing features that help developers find the root cause of performance problems
and errors:

• Application Insights Profiler. The profiler runs in the background for a few minutes per hour and
uses low-overhead profiling techniques to show hot methods that take a long time to service
requests in your applications.
• Application Map. Application Insights automatically maps your dependencies and database calls,
and shows an interactive, navigable map of your application.

• Snapshot Debugger. You can set trace points in an application running in production, and the
debugger will capture the stack trace and the values of parameters and variables that you specify, so you
can refer to them in Visual Studio 2017.

For an introduction to Application Insights and numerous links to other resources, tutorials,
documentation, and videos, go to: https://aka.ms/moc-20487D-m8-pg19

Adding Application Insights to a web application


There are multiple ways to add Application
Insights to your web application or service. You
can add Application Insights support at run time,
without modifying your application’s code, or at
development time by using your favorite
development environment. In this lesson, we focus
on ASP.NET Core applications, although .NET
console applications, Windows services, Node.js
services, J2EE applications, and numerous other
runtimes are also supported. Additionally, we
focus on using Application Insights for server-side
monitoring of web services, although JavaScript
monitoring on the client side (for web applications) is also available.

Note: Application Insights works by installing an instrumentation library in your web


application or service. This library then monitors your service and sends data to the Application
Insights portal in Azure. Because the agent is integrated into your service (white-box), it has
access to fine-grained data, including exceptions, calls to other web services, and database
queries.

To add Application Insights to a live Azure web app (in Azure App Service), you enable Application
Insights from the web app’s blade in the Azure portal. This automatically turns on Application Insights
monitoring, which will collect HTTP response data, exceptions thrown, dependencies accessed by the
application, system performance data, and more.

The following screenshot illustrates how to add a new Application Insights resource to an existing, live
web app in Azure App Service. After adding the resource, the application is restarted and automatically
monitored by Application Insights.

FIGURE 8.9: THE APPLICATION INSIGHTS TAB


The following screenshot depicts the overview pane of Application Insights, showing basic data pouring
into the portal:

FIGURE 8.10: THE OVERVIEW PANE OF APPLICATION INSIGHTS

The following screenshot depicts the Live Stream feature in Application Insights, showing live real-time
data from the monitored web app.

FIGURE 8.11: THE LIVE STREAM FEATURE IN APPLICATION INSIGHTS

For more information on using Application Insights with live web apps hosted in Azure App
Service, go to: https://aka.ms/moc-20487D-m8-pg20
For more information on Application Insights dashboards, navigating them, and customizing
them for the needs of your specific resources, go to: https://aka.ms/moc-20487D-m8-pg21

To add Application Insights to your ASP.NET Core web service at the time you develop it, use the Visual
Studio wizard located in Project > Add Application Insights Telemetry. The wizard adds a NuGet
package to your project and integrates into the ASP.NET Core pipeline. It will then collect data on HTTP
requests and responses, .NET exceptions, traces and logs, and more. Application Insights added this way
works whether you host the application on-premises or publish it to Azure, and telemetry is collected even
when you run the application locally during development.
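If you prefer not to use the wizard, the following is a minimal sketch of the equivalent manual wiring in
the application’s Startup class, assuming the Microsoft.ApplicationInsights.AspNetCore NuGet package has
been added to the project:

Registering Application Insights in Startup (sketch)

public void ConfigureServices(IServiceCollection services)
{
    // Reads the instrumentation key from configuration (appsettings.json)
    services.AddApplicationInsightsTelemetry();
    services.AddMvc();
}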

The following screenshot shows the Visual Studio wizard for adding Application Insights to a newly-
created ASP.NET Core web application, where you can configure the Application Insights resource, billing,
and other settings:

FIGURE 8.2: THE VISUAL STUDIO WIZARD


The following screenshot depicts the summary page of the Application Insights wizard, which shows the
key configuration steps that were performed and are now in place:

FIGURE 8.3: THE SUMMARY PAGE OF THE APPLICATION INSIGHTS WIZARD
The Application Insights wizard adds the Application Insights instrumentation key to your application’s
appsettings.json file. This key is required for sending data to Application Insights. It is the link between
your application and the Application Insights resource.

Application Insights instrumentation key added by Visual Studio


{
"ApplicationInsights": {
"InstrumentationKey": "04bcbb32-bf71-4055-8335-c496b0146261"
}
}

For more information on using Application Insights with ASP.NET Core web applications and
services, go to: https://aka.ms/moc-20487D-m8-pg22

Demonstration: Integrating and viewing Application Insights


In this demonstration, you will add Application Insights to an ASP.NET Core application and view the data
in the Azure portal.

Demonstration Steps
You will find the steps in the “Demonstration: Integrating and viewing Application Insights“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD08_DEMO.md.

Tracking custom events with the Application Insights SDK


Although Application Insights tracks a wealth of
data from the platform, system, and web runtime
on your behalf, you can often benefit from
reporting custom telemetry and correlating it with
other data to diagnose problems faster and more
efficiently. The Application Insights API, which you
can use from various platforms, including ASP.NET
Core, provides multiple methods for sending
additional telemetry to your Application Insights
resource. These methods include the following:
• TrackPageView. This method is for tracking
page views in web applications, and screen
switches in mobile applications.

• TrackEvent: This method is for tracking generic events and user actions such as button clicks or
transitions between text boxes in a form.

• TrackMetric. This method is for tracking generic performance metrics such as the number of threads
processing a specific request.

• TrackException. This method is for tracking exception information and stack traces.

• TrackRequest. This method is for tracking all types of requests performed by the server, supporting
latency analysis on request duration and frequency.

• TrackTrace. This method is for tracking diagnostic messages and logs.

• TrackDependency. This method is for tracking calls and durations to any external component, such
as a database, a storage system, or a web service.

For more information on using the Application Insights SDK, go to:


https://aka.ms/moc-20487D-m8-pg23

To start using the Application Insights API from your .NET application, add the Application Insights SDK to
your project, and then create an instance of the TelemetryClient class. The Application Insights
instrumentation key from your appsettings.json file is automatically used to send data to the appropriate
Application Insights resource.

The following code example demonstrates how to create a new instance of the TelemetryClient class:

Creating telemetry client


public static readonly TelemetryClient Telemetry = new TelemetryClient();

The following code example demonstrates how you can track a database query to an external database,
which might not be supported by Application Insights for some reason:

Tracking database query by using Application Insights


public async Task<PassengerManifest> GetManifestForFlight(string flightId)
{
using (var dep = Telemetry.StartOperation<DependencyTelemetry>(
$"manifest query for flight {flightId}")
{
// … The actual code of the query
}
}

In the preceding example, no additional information other than the operation title is provided to
Application Insights. Nonetheless, when running in an ASP.NET Core context, Application Insights will
automatically track the HTTP request being handled, and some additional information. For further
customization (to include additional data in the event), set properties on the DependencyTelemetry
class. For example, you might want to set the Data property to the database query performed. You can
also add arbitrary application-defined values to the Properties and Metrics dictionary properties.
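The following sketch illustrates this customization. The op variable name, the query text, and the dictionary
keys are illustrative; Type, Data, Properties, and Metrics are members of the DependencyTelemetry class,
and rowCount is an assumed variable holding the query’s result size:

Customizing dependency telemetry (sketch)

using (var op = Telemetry.StartOperation<DependencyTelemetry>(
    $"manifest query for flight {flightId}"))
{
    op.Telemetry.Type = "SQL";
    op.Telemetry.Data = "SELECT * FROM Manifests WHERE FlightId = @flightId";
    op.Telemetry.Properties["FlightId"] = flightId;
    op.Telemetry.Metrics["RowCount"] = rowCount;

    // … The actual code of the query
}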

The following example demonstrates how you can track an arbitrary custom event, such as cancelling a
flight reservation:

Tracking custom event


var data = new Dictionary<string, string>() {
{ "FlightNumber", "BY001" },
{ "ReservationCode", "AB78DE" }
};
Telemetry.TrackEvent("FlightCancelled", properties: data);

The following example demonstrates how you can track a custom performance metric, such as the
number of threads currently servicing requests in a custom thread pool implementation:

Tracking custom performance metric


private void RebalanceThreadPoolThreadCount(int newCount)
{
// … Do the actual work of ensuring there are newCount threads
var metric = new MetricTelemetry {
Name = "threadpool/numthreads",
Value = newCount
};

Telemetry.TrackMetric(metric);
}

Note: During development, you might want to temporarily disable telemetry. You can do
so by setting the TelemetryConfiguration.Active.DisableTelemetry property to true.
Alternatively, you might want to use a separate Application Insights resource for development or
to test telemetry, to avoid getting it mixed with the production telemetry data.
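For example, a minimal sketch of disabling telemetry for local debug builds:

Disabling telemetry during development (sketch)

#if DEBUG
// Prevents debug-run telemetry from being mixed with production data
TelemetryConfiguration.Active.DisableTelemetry = true;
#endif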

When performing high-frequency data ingest, you might want to use sampling to reduce
traffic and data costs for your Application Insights resource. For more information on using
sampling, go to: https://aka.ms/moc-20487D-m8-pg24

Dependency monitoring and Application Mapping


After data starts flowing into your Application
Insights resource, you can use a variety of
dashboards and queries to analyze your data,
detect trends, drill into problematic behavior,
pinpoint anomalies, and understand your
application’s architecture. The Azure Log Analytics
query language underpins the Application Insights
portal, and can be used to query, filter, group,
sort, and extract the data that you need, in a
variety of formats and visualizations. Your data in
Application Insights is sent to several tables, such
as requests, exceptions, and dependencies.

To learn more about the Azure Log Analytics query language, go to:
https://aka.ms/moc-20487D-m8-pg25

The following query retrieves the top 10 countries by traffic in the past 24 hours, by starting from the
requests table, and then adding a filter by timestamp, grouping by the client’s country or region
(determined automatically from their IP address), and rendering the results as a pie chart:

Azure Log Analytics example query


requests
| where timestamp > ago(24h)
| summarize count() by client_CountryOrRegion
| top 10 by count_
| render piechart

The following screenshot depicts the result of executing the above query in the Application Insights
portal:

FIGURE 8.4: SCREENSHOT OF THE RESULT OF A QUERY IN THE APPLICATION INSIGHTS PORTAL
The following query joins the results from two tables—requests, which contains all HTTP requests
performed by your service, and exceptions, which contains all the exceptions. It displays the failed
requests where an exception occurred during processing, along with the exception type:

Azure Log Analytics example query


requests
| where success == "False"
| join (exceptions) on operation_Id
| take 10
| project client_CountryOrRegion, url, innermostType, innermostMessage

The following screenshot depicts the result of executing the above query in the Application Insights
portal:

In addition to just looking at the requests and errors in your application itself, you can also analyze data
from any external dependency calls performed by your application. For example, if your service makes
HTTP requests to another service, or if your service uses table storage, databases, and other external
resources, this data is tracked in the dependencies table.

The following query extracts all failed requests to dependencies of type SQL (which are databases) and
groups the operation by the SQL query executed, from the data column:

Azure Log Analytics example query


dependencies
| where timestamp > ago(24h) and type == 'SQL' and success == false
| summarize count() by data

The following screenshot depicts the output of the above query, illustrating that over the last 24 hours,
there were 1,435 failed SQL statements with the same text—inserting a value into the ServiceTickets
table:

FIGURE 8.6: THE OUTPUT OF A QUERY


This automatic dependency analysis is extremely useful, and in fact you can use it not only for queries, but
for more advanced visualizations as well. Application Insights provides the Application Map feature,
which is an interactive diagram of your services, clients, and external dependencies that provides a bird’s-
eye view of your system’s health. If necessary, you can drill into the Application Map to look at specific
components and track down issues.
The following screenshot depicts a sample Application Map. The external dependencies, in this case, two
HTTP services, are shown next to the monitored service:

FIGURE 8.7: A SAMPLE APPLICATION MAP

For more information on Application Map and the next-generation Composite Application
Map (which is in preview at the time of writing), go to: https://aka.ms/moc-20487D-m8-pg26

Demonstration: Viewing application dependencies and request timelines


In this demonstration, you will show how Application Insights tracks dependencies, such as calls to an
external HTTP service or the use of Table Storage.

Demonstration Steps
You will find the steps in the “Demonstration: Viewing application dependencies and request timelines” section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_DEMO.md.

Load testing with Application Insights


Load testing (or performance testing) is the
practice of sending a real or artificial workload to
your service with the purpose of testing its
scalability, failure conditions, and general ability to
perform under load. Numerous load testing
frameworks are available, with a great variety of
features, including Visual Studio 2017 itself, which
can perform a load test directly or orchestrate a
number of load test clients that will direct traffic
to your service. Although you can perform load
testing on any accessible web service, a common
practice is to use a separate staging or
development deployment of your service and direct the load test traffic to that deployment.
For even greater robustness and high-capacity testing, you can use the Azure infrastructure for load
testing from Visual Studio Team Services (VSTS) or directly from the Azure portal. When you run a load
test on a web service that has Applications Insights enabled, you can extract a wealth of information on its
behavior under load, failure rates, and unusual conditions. You can also perform capacity planning by
identifying the key hardware bottlenecks (for example, CPU is fully utilized) and the application
performance indicators (for example, response latency spikes up to 500ms).
To create a new performance test for your web app hosted in Azure App Service, go to the Performance
test tab in the DEVELOPMENT TOOLS section, and then create a new performance test and run it from
the portal. Remember to configure Application Insights on the web app before starting the load test.
Then, you can explore the test progress in real-time by using the basic load test UI, and simultaneously, by
using Application Insights.

Note: If you don’t have a Visual Studio Team Services account, you will need to create one.
The Azure portal will automatically suggest that you create an account, or help pick one of your
existing accounts that can be associated with the Azure App Service.

The following screenshot depicts the Performance test tab for a web application hosted in Azure App
Service in the Azure portal:

FIGURE 8.8: THE PERFORMANCE TEST TAB



The following screenshot depicts the performance test configuration dialog box, where you can specify
the duration of the load test and the simulated user load:

FIGURE 8.9: THE PERFORMANCE TEST CONFIGURATION DIALOG BOX
The following screenshot depicts the performance test details during the test execution, which provides a
preview of the application’s performance under load:

FIGURE 8.10: THE PERFORMANCE TEST DETAILS DURING THE TEST EXECUTION

Note: You can easily overload an important service (in other words, carry out a denial-of-service
attack) by using load tests on the Azure load testing infrastructure. Take great care to
test only systems under your direct control, never test production instances serving an important
load, and make sure other people in your organization are aware of the load test.

For more advanced load testing scenarios, you should use Visual Studio Team Services
directly. Refer to the quick start and documentation at: https://aka.ms/moc-20487D-m8-pg27

Lab B: Monitoring Azure Web Apps with Application Insights
Scenario
In this lab, you will use Application Insights to monitor and diagnose a web service running in Azure Web
Apps.

Objectives
After you complete this lab, you will be able to:

• Add Application Insights to your application.

• Load test your service using Azure.

• Analyze performance using Application Insights.

Lab Setup
Estimated Time: 30 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD08_LAK.md.

Exercise 1: Add the Application Insights SDK


Scenario
Add Application Insights to an ASP.NET Core application and publish it to Azure.

Exercise 2: Load Test the Web Service


Scenario
Perform a load test on the ASP.NET Core application by using Application Insights.

Exercise 3: Analyze the Performance Results


Scenario
Analyze the performance results in the Azure portal.

Module Review and Takeaways


In this module, you learned how to monitor WCF and ASP.NET Web API services and how to collect
diagnostic information from them. You also learned how to monitor IIS by using performance counters,
and how to collect information from services running in Azure. Finally, you learned how Azure collects and
displays common metrics for web apps, cloud services, and storage services, making it easy to see at a
glance whether your application is experiencing performance problems or other issues.

Best Practice
Invest considerable time in instrumenting your application with tracing and performance counters. Make
sure you can successfully monitor the application in the development environment. This will make it easier
to monitor in Azure, and guarantee that you can diagnose problems that occur only in the production
environment, such as failures under heavy load.

Review Question
Question: How can you monitor applications running in Azure?

Tools
• Microsoft Visual Studio 2017

• Windows Communication Foundation (WCF) Service Configuration Editor


• SvcTraceViewer.exe

Module 9
Securing Services On-premises and in Microsoft Azure
Contents:
Module Overview 9-1

Lesson 1: Explaining Security Terminology 9-2


Lesson 2: Securing Services with ASP.NET Core Identity 9-9

Lab A: Using ASP.NET Core Identity 9-22

Lesson 3: Securing Services with Azure AD 9-23


Lab B: Using Azure Active Directory with ASP.NET Core 9-44
Module Review and Takeaways 9-45

Module Overview
Security is a major concern for many distributed applications. Key security issues that you must address
when you design a web service include authentication, authorization, and secured communication.
Managing identities in distributed systems can be challenging. Identities are often shared across
application and organization boundaries. Claims-based identity is a modern approach designed to
overcome these challenges in distributed systems. This module describes the basic principles of modern
identity handling. The module also demonstrates how to use infrastructures such as Microsoft Azure
Active Directory (Azure AD) to implement authentication and authorization with claims-based identity in
Microsoft ASP.NET Core applications. The module covers both intra-organization authentication and B2C
authentication scenarios.
By applying the concepts and technologies covered in this module, you can simplify authentication and
authorization in your distributed applications integrating with modern identity providers.

Note: The Azure portal UI and Azure dialog boxes in Microsoft Visual Studio 2017 are
updated frequently when new Azure components and SDKs for Microsoft .NET are released.
Therefore, it is possible that some differences will exist between screenshots and steps shown in
this module, and the actual UI you encounter in the Azure portal and in Microsoft Visual Studio
2017.

Objectives
After completing this module, you will be able to:

• Describe the basic principles of claims-based identity.

• Describe the authentication and authorization flows in OpenID Connect, including Server-to-Server
authorization.

• Integrate client applications and authenticate users by using Microsoft Authentication Library (MSAL).

Lesson 1
Explaining Security Terminology
Before you understand how to implement security in your services, it is important that you understand
why securing services is important and what security features are available to secure web services. This
lesson provides you with an overview of security terminologies.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the encryption process.

• Describe secured connections that use SSL/TLS.

• Describe the processes of authentication and authorization.

Symmetric/Asymmetric encryption
In today's network and computer environments,
providing security for data is becoming
mandatory. Encryption is one of the methods for
providing such security and its main intention is to
protect user information that is being transmitted
between a browser and a remote server.
Such information can be passwords, personal
details, payment information, or any information
that is considered private. In addition to
protecting information over the network,
organizations or individuals also protect their
information stored on local computers, servers,
and the mobile devices they own.

The encryption process applies an encryption algorithm, which is a mathematical function, to the data by
using an encryption key. The encryption process generates encrypted text, which is also
known as ciphertext. This text can be converted back to its original form only by applying the original key.
This process is called decryption.

The two most widely used methods for encryption are:

• Symmetric Encryption. Using the same key to encrypt and decrypt the information

• Asymmetric Encryption. Using different keys for encryption and decryption.

Symmetric encryption
Symmetric encryption is easy, fast to implement, and has been in use for many years. The key can be a
string, a number, or a combination of random letters.

A wide range of symmetric key ciphers is still being used. An example is AES (AES-128/AES-192/AES-256),
which stands for Advanced Encryption Standard used by government agencies to protect their data. Other
examples include Blowfish, RC4, DES, RC5, and RC6.
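As an illustration of the symmetric approach, the following minimal sketch uses the AES implementation from the .NET System.Security.Cryptography namespace; the plain text is illustrative, and in a real system the key and IV must be shared securely between the parties:

Symmetric encryption with AES (sketch)

using System.Security.Cryptography;
using System.Text;

public static class SymmetricEncryptionExample
{
    public static string EncryptAndDecrypt(string plainText)
    {
        using (var aes = Aes.Create())   // generates a random key and IV
        {
            byte[] data = Encoding.UTF8.GetBytes(plainText);

            byte[] cipherText;
            using (var encryptor = aes.CreateEncryptor())
            {
                // Encrypt with the shared key
                cipherText = encryptor.TransformFinalBlock(data, 0, data.Length);
            }

            using (var decryptor = aes.CreateDecryptor())
            {
                // Decrypt with the same key
                byte[] decrypted = decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);
                return Encoding.UTF8.GetString(decrypted);
            }
        }
    }
}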

Symmetric encryption has the following drawbacks:

• Because the same key is used for both encryption and decryption, the key needs to be shared
somehow between the sender and the receiver. This means that if the key is exposed or lost it needs
to be regenerated and distributed again.

• It does not scale very well because each type of application and user requires different keys.
Regenerating and maintaining keys are difficult tasks.

Asymmetric encryption
Asymmetric encryption, which is also called public key cryptography, uses two different keys—public key
and private key—that are linked together mathematically. The public key is, as the name implies, public. It
can be shared and used by anyone who wants to send information. The sender can encrypt data using a
public key.

The receiver uses the private key to decrypt the data. The receiver needs to keep the private key secure.
To prevent brute-force attacks, a private key needs to be complex and long. Many cryptographic
processes use symmetric cryptography to efficiently encrypt the data but asymmetric cryptography to
exchange the key.
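A minimal sketch of the asymmetric approach, using the RSA implementation in System.Security.Cryptography (the message text is illustrative):

Asymmetric encryption with RSA (sketch)

using System.Security.Cryptography;
using System.Text;

public static class AsymmetricEncryptionExample
{
    public static void Demo()
    {
        using (var rsa = RSA.Create())   // generates a new public/private key pair
        {
            byte[] data = Encoding.UTF8.GetBytes("secret message");

            // Anyone who holds the public key can encrypt
            byte[] cipherText = rsa.Encrypt(data, RSAEncryptionPadding.OaepSHA256);

            // Only the holder of the private key can decrypt
            byte[] decrypted = rsa.Decrypt(cipherText, RSAEncryptionPadding.OaepSHA256);
        }
    }
}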

HTTPS with SSL/TLS


Hypertext Transfer Protocol (HTTP) is the
foundation of data communication for the web,
and it defines how messages are formatted and
transmitted. However, it is an unencrypted
protocol, which is not secure for transferring
sensitive data between a client and a remote
server. Therefore, using HTTP for transferring
sensitive data such as passwords, emails, and
payment transactions exposes users to the risk of
data exposure.
HTTPS stands for HTTP Secure. It has become the
standard for securing HTTP requests for many
websites that deal with sensitive data. Banks, insurance agencies, content providers, and even social
networks such as Facebook and Twitter use HTTPS.

In HTTPS, all communication between the client and the server is encrypted.

Two major secure protocols are used to encrypt the communication:


• Secure Sockets Layer (SSL)

• Transport Layer Security (TLS), which is a more recent protocol that aims to replace SSL

You can identify whether a site uses SSL encryption through several visual hints. The address will start
with https://, the background color of the address bar may change, a padlock icon may appear near the
address bar, and sometimes an SSL certificate logo of the certificate authority (CA) will appear on the site.

How does SSL Work?


1. To apply SSL encryption, you must obtain an SSL certificate from a CA. After the CA issues the SSL
certificate, install it on the server together with its private key. The certificate itself contains the public
key; the private key is kept secret on the server.

2. An SSL handshake is performed for any client that connects to the server. During the handshake phase,
the server and the client agree on the protocol (SSL/TLS) and its version. Then the server
sends the client its certificate, which contains the public key.

3. The client validates the certificate and generates a session key (a third key), which is used for the
session. This key needs to be sent back to the server, so it is encrypted with the server's public key.
Only the server, which has the private key, can decrypt this message and recover the session key that
was generated.

4. During the session, all messages between the client and the server will be encrypted by using the
symmetric key.

Benefits of the SSL Encryption methods


By using asymmetric encryption to exchange the session key and symmetric encryption during the session
communication, both the client and the server benefit from a connection that is both fast and
secure. This allows the server to scale this method to many more clients.
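In the context of this course, ASP.NET Core can enforce this behavior for you. The following minimal sketch, assuming ASP.NET Core 2.1 with the HTTPS redirection middleware, redirects plain HTTP requests to HTTPS so that traffic is always encrypted with SSL/TLS:

Redirecting HTTP requests to HTTPS in ASP.NET Core (sketch)

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Redirect all HTTP requests to the HTTPS endpoint
    app.UseHttpsRedirection();

    app.UseMvc();
}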

Identity and credentials


In computer science, an identity, also referred to as a digital identity, is the term used to describe an
entity in the digital world. An entity may be a person (the user of the system), an organization,
an application, or even a device. The term is most often used in the context of the users of a system
who interact with it directly.

Just like a civil personal identity, which consists of a person’s first and last name, SSN, birth date, and
ID number, computer systems need to manage a set of characteristics for a digital identity, for
example, an email address, password, digital token, or digital certificate.

One of the greatest challenges of the digital cyberspace is to identify the entities that interact with the
system. For example, a social network app has to identify its users to allow them access to their private
space in the network and a bank has to identify its customers to allow them access to their bank account.

Digital identity will usually hold one or more attributes associated with it. Identity attributes include
usernames, passwords, email addresses, phone numbers, or any other information provided by the user.
Attributes that are mandatory for verifying the existence of a digital identity in a system are called
credentials.
A credential is a set of attributes that the system validates to verify that an
entity is legitimate. For example, a bank account may require its users to identify themselves when
logging into the system by providing three attributes—an email address, a username code, and a personal
password the user has chosen. These three attributes are the credentials required by the bank. Other
systems may require a different set of credentials such as email and phone number. Credentials for a
system may even be in a digital form such as a digital signature or biometrics.

Authentication and authorization


In the previous section, you learned that an
identity represents a digital entity that needs to
access a secure resource. To grant access to such
an identity and validate its claims against
predefined rules, the system needs an
authentication mechanism.

Authentication
Authentication is the process of confirming that an identity is, in fact, who it claims to be. In other
words, it is a mechanism that ensures that such an entity exists in the system and that the claims it
provides match the claims that are stored in the system.
Here is a typical scenario. A user of a system needs to log in to the system by providing the username and
password. On the log-in screen, the user is asked to enter the username and the password. The
authentication process takes those claims, verifies that the username exists in the system, and verifies that
the password matches the password stored in the system. Some systems, such as personal computers and
airport terminals, authenticate an individual by using fingerprint scanners and even image matching
algorithms. An authentication process is critical and important for organizations that want to stop
unauthorized users from accessing their systems.

Authentication policies and types


The administrator of a system can further improve the security of a system by enforcing several rules for
the authentication process. These rules include:

• Enforcing expiration rules on passwords

• Preventing users from reusing their previous passwords


• Enforcing password policy rules such as the length and complexity of a password (For example, the
password must be at least eight characters long and must be a combination of letters and numbers)
• Enforcing two-factor authentication

• Blocking users that did not sign into the system for a long time

On the other hand, some systems allow users to use their credentials for other services (typically a social
network or a service such as a Microsoft account). This allows the authentication process to grant access
to the system based on the third-party service credentials. This is a very popular technique in many
authentication systems today. It helps the users to reduce the number of credentials they need to
memorize to access different services.
A common security enhancement for authentication systems is called two-factor authentication (TFA).
Those systems will require the users to supply different sets of credentials. For example, a user may have
to provide a username and password combination, and then enter a code or a token shared with the user
through email, SMS, or a dedicated app. Such a system will grant access only if the user of the system has
a corresponding device, such as a mobile phone, to which the code is sent. This code will be valid only for
a limited period. This way, only the holder of the physical device can use the code and gain access to the
system.

Authorization
Allowing access to a computer system by identifying and authenticating users is not enough. Just by
getting authenticated, a user won’t get the rights to perform all the actions in the system. Authorization is
the process that follows authentication. The goal of authorization is to determine what actions a user is
allowed to perform.

For example, a secured computer system may restrict several users from performing sensitive actions in
the system by categorizing them into groups such as administrator, managers, and users. Administrators
may have access to every part of the system. They will be able to change policies and add and remove
users.
Managers may have access only to an area where they can modify data. For example, updating stocks,
prices, and delivery options in a commercial application. Users may have access only to an area where
they can buy products and manage their shopping cart. A user cannot update the price of a product, and
a manager cannot remove users from the system.

The Authorization mechanism helps determine which identity has access to which resource of the
system.
Here is a summary of the differences between authentication and authorization:
• Authentication determines who the user is.

• Authorization determines what a user can do.

Authentication modes

Identity Provider (IdP) and Service Providers
So far, we have looked at a simple scenario where
a user needs to access a resource within one
organization. Consider a system where a user
needs to access multiple resources from several
system providers. One example could be a user
who needs to sign into the organization account,
the Microsoft Office 365 account, and the
Salesforce account. In that case, the user will need
to memorize many credentials for each system.

Single sign-on (SSO) is the concept of authenticating just once and reusing that authentication
information to access multiple services without having to reauthenticate at every sign in. For the SSO
scenario to work, enterprise systems use an IdP to perform the authentication process. The IdP is the
organization that maintains a directory of users and authentication mechanisms.

The organization that hosts the target application is called the application service provider (ASP).
In most IdP systems, authentication is done by sending back a signed token that contains the credentials and
trust signature for the requester. Those tokens can be issued in several formats, based on standards such as
SAML, OAuth 2.0, and OpenID Connect.

In a scenario where a user has an account with the IdP and wants to use an application in the service
provider, several authentication modes can be used. Here are two examples:

• Passive authentication. When the user accesses the service provider resource, such as Salesforce and
Office, the service provider will redirect the request to the federation server, which will contact the IdP
federation server to generate a token. The user will be redirected to the IdP sign-in page. After
successfully signing in, the user will get the token to access the service provider resource.
• Active Authentication. In an active authentication scenario, the client connects to the IdP directly,
receives the token, and then uses the token to authenticate access to the service provider. An
example of active authentication is a mobile device verifying a user's identity continuously by using
its sensors.

Claim-based identity and authentication


Building a user identity system for each
application that needs to authenticate users is a
complex task. It involves building user databases,
storing users’ information in a dedicated format
for a particular application, building a dedicated
communication framework, and repeating these
tasks for every new application. In most cases, user
data needs to be duplicated between systems,
generating problems of data integrity.

In addition, a user identity system will not allow


SSO for the same user across multiple services and
platforms. Claims-based Identity solves these
problems by providing a common mechanism to describe an identity in the system by using a claim. A
claim is a statement about an identity in the system. Examples include a user, group, or device, its set of
attributes and values, its roles and permissions, and its issuer information.
A set of claims is combined into a token. The token is digitally signed by the token issuer when it’s created
so that it can be verified at the receiver end. The token can also contain additional information such as an
expiry date or id. For example, a signed token may hold three claims in it—one with the user’s name,
another with the user’s role, and a third with the user’s age. Tokens are created by using a software tool
called security token service (STS). Popular formats for claims-based tokens include XML or the Security
Assertion Markup Language (SAML), and JSON Web Tokens (JWT).

Claims-based Authentication
In a typical scenario, an application, such as a web browser or some other client, working on behalf of a
user, asks an STS for a token that contains the claims for this user. The STS authenticates the user so that
the STS can confirm the identity of the user (for example, verifying passwords or validating tickets).

Typically, the request sent to an STS contains a Uniform Resource Identifier (URI) that identifies the
application that the user wants to access. The STS then looks up information about both the user and the
application from a database that maintains account information and other attributes about users and
applications. This can also be accomplished by using the Active Directory service. After the STS finds what
it needs, it generates the token and returns it to the requester.
MCT USE ONLY. STUDENT USE PROHIBITED
9-8 Securing Services On-premises and in Microsoft Azure

Claims, tokens, and STSs are the foundation of claims-based identity. The idea is to let a user present
digital information to an application in a unified manner so that the application can make a decision
about the user that presented the claims-based token. The user will usually get a token about an
application from an STS. After the user gets the token, the client sends it to the application, which is
configured to work with one or more trusted STSs.

To process the token, the application depends on an identity library, which verifies the token’s signature,
so that the application knows which STS issued the token. If the application trusts the STS that issued this
token, it accepts the token’s claims as correct and uses them to decide what the user can do. For example,
if the token contains the user’s role, the application can assume that the user really has the rights and
permissions associated with that role.

Lesson 2
Securing Services with ASP.NET Core Identity
ASP.NET Core Identity is a membership system that adds logon functionality to ASP.NET Core apps. Users
can create an account with the logon information stored in Identity, or they can use an external logon
provider. Supported external logon providers include Facebook, Google, Microsoft, and Twitter.

Lesson Objectives
After completing this lesson, you will be able to:

• Define ASP.NET Core Identity.


• Explain how to add authentication to ASP.NET Core apps.

• Explain how to use claims-based authorization with ASP.NET Core Identity.

• Explain how to use ASP.NET Core extensibility features to add authentication with social networks.

What is ASP.NET Core Identity?


As we learned in topic 1, identity is the set of rules
and processes that defines the users of a system
and the users’ privileges and access rights to
different resources of the system. Authentication
is the process that confirms the identity of the
users and authorization is the process that
determines the access rights of the users in the
system. ASP.NET Core Identity is a complete
solution for both authentication and authorization
rules.

Authentication Capabilities
The authentication capabilities of ASP.NET Core
Identity help build a membership system around user logons and credentials by defining several logon
techniques such as username and password combinations, OAuth with token authentication, social
network logons, and advanced features such as two-factor authentication and password recovery.

Authorization Capabilities
The authorization capabilities of ASP.NET Core Identity help define:

• Simple authorization. Controls what users logged on to the system can see regardless of their roles or
claims.
• Role-based authorization. Allows access to certain resources based on the user roles. For example, in a
content management system, the system may have a user role with read-only access, an editor role
for modifying the content, and an admin role that can modify system settings and grant access to
different users.

• Claim-based authorization. Restricts the access to the resource for a subset of users that meet certain
criteria such as all users under a certain age and all workers with a particular employee number. This
fine-grained control gives a lot of flexibility to security systems.

• Policy-based authorization. By defining a policy rule, the system can grant access according to that
rule, for example, giving access to a resource only between 09:00 AM and 06:00 PM (see the sketch after this list).
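The following minimal sketch shows how such a time-window rule could be expressed; the WorkingHours names are illustrative assumptions, not types provided by the framework:

Policy-based authorization with a custom requirement (sketch)

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public class WorkingHoursRequirement : IAuthorizationRequirement { }

public class WorkingHoursHandler : AuthorizationHandler<WorkingHoursRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, WorkingHoursRequirement requirement)
    {
        var hour = DateTime.Now.Hour;
        if (hour >= 9 && hour < 18)
        {
            context.Succeed(requirement); // inside the allowed time window
        }
        return Task.CompletedTask;
    }
}

// Registration in ConfigureServices:
// services.AddSingleton<IAuthorizationHandler, WorkingHoursHandler>();
// services.AddAuthorization(options =>
//     options.AddPolicy("WorkingHours",
//         policy => policy.Requirements.Add(new WorkingHoursRequirement())));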

You can configure ASP.NET Core Identity to use the Microsoft SQL Server database to store users, users’
passwords, claims, and roles, and also manage sign-in sessions. In addition, you can use ASP.NET Core
Identity with your own persistent storage such as MongoDB or Microsoft Azure Table storage.
ASP.NET Core Identity combines authentication and authorization capabilities with the power of Entity
Framework. For a developer, this gives great flexibility and productivity when adding security layers to
web apps.

Adding ASP.NET Identity to Web APIs


ASP.NET Core allows you to build a starter project very easily by using the CLI tools. In this topic, you
will explore several ASP.NET Core Identity project templates. The different command options for
each project template work the same whether you are using an ASP.NET Core MVC project, a Razor Pages
project, a Web API project, or any web app with a custom UI toolkit such as Angular or React. In this topic,
we will explore the identity options with Web API projects.

The default command for creating a website project by using the CLI tool is:

Creating a website command


dotnet new mvc -o myWebSite

This creates a default .NET Core MVC project with no authentication capabilities. Therefore, all APIs in this
project are accessible to all users, and no user needs to be logged on.
When you wish to add an authentication layer to your project, you need to specify the --auth flag, which
has the following possible values:

• None. No authentication (default)

• Individual. An individual authentication layer where identity management is done on the website
itself

• IndividualB2C. Individual authentication with Azure AD B2C


• SingleOrg. Organizational authentication for a single tenant

• Windows. Windows authentication

The command for creating an ASP.NET Core Web API project with individual authentication is:

Creating an ASP.NET Core Web API project with an individual authentication command
dotnet new mvc -o myApiSite --auth Individual --use-local-db

This command creates a new .NET Core MVC project named myApiSite with individual accounts
authentication middleware; the accounts are stored in a LocalDB database that is managed by Entity Framework.
If you do not specify the --use-local-db flag, a SQLite database is created by default. To restore all
dependencies and initialize the database tables, you need to run scaffolding.

Scaffolding
ASP.NET Core 2.1 Identity is implemented as part of a Razor class library. Because of this, the default
application project template does not include the source code for the identity framework. However,
sometimes it is useful to add scaffolding code that will allow you to modify certain default behaviors. The
following procedure creates the auto-generated classes that are used by the framework for the
register, logon, and logout scenarios.

To enable scaffolding, you first need to install the code generator tool of ASP.NET Core Identity.

Installing a code generator tool command


dotnet tool install -g dotnet-aspnet-codegenerator
dotnet add package Microsoft.VisualStudio.Web.CodeGeneration.Design
dotnet restore

Next, you will need to run the code generator for the requested files:

Running a code generator command


dotnet aspnet-codegenerator identity -dc auth05.Data.ApplicationDbContext --files
"Account.Register;Account.Login;Account.Logout"

Note: You will need to use the real ApplicationDbContext namespace for your project.

The next step will be to run the database migrations and the seed code that creates the database tables
used for the Identity framework.

Database migration command


dotnet ef migrations add CreateIdentitySchema
dotnet ef database update
dotnet build

The CLI tool will create all the scaffolding classes needed for the Identity System along with some UI
classes that will enable users to perform logon, logout, and registration.

It is important to notice the following code changes made by the creation process:
• The appsettings.json file contains the connection string for the LocalDB database. This is the
place to plug in another connection string if you decide to work with a different storage provider.

• An Area/Identity folder, which contains all the scaffolding code. Among the files that were
generated it is interesting to look at:

o LoginModel. A class that will hold the logon action.

o RegisterModel. A class that will hold the register operation.

o LogOutModel. A class that will hold the logout operation.

• A Startup.cs file which contains all the bootstrap code needed by the identity system.

By looking at the startup.cs file, you can see the code that was generated by the .NET scaffolding process:

Generated code of the identity system configuration


public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("DefaultConnection")));
    services.AddDefaultIdentity<IdentityUser>()
        .AddEntityFrameworkStores<ApplicationDbContext>();

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}

The code plugs the Entity Framework DbContext into the built-in dependency injection (DI) container of
ASP.NET Core. Then, it adds the identity framework.
Finally, the Configure method in the Startup class enables authentication:

Enabling an authentication code


public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
….
app.UseAuthentication();
app.UseMvc();
}

This will enable authentication on the ASP.NET Core project that we have created.
There are more scaffolding options; to see them all, run the following command:

Code generator options command


dotnet aspnet-codegenerator identity --help

In the next lesson, we will add users and roles to the system and see how it works.
After the dotnet restore process and the database update process are finished, the following tables will be
created automatically by the Entity Framework seed code:

• AspNetUsers. A table to store all the users, their email addresses, and their hashed-passwords.
• AspNetUserLogin. A table to store all the logon sessions of a user.

• AspNetRoles. A table to store the roles in the system.

• AspNetUserRoles. A relation table between a user and its roles.


• AspNetUserClaims. For claims-based authentication of users, this table will hold all the user’s claims
IDs and values.

• AspNetRoleClaims. For claims-based authentication of roles, this table will hold all the claims IDs
and values for a given role.

• AspNetUserTokens. A table that is used to store user tokens that were authenticated by using an
external OAuth token provider.

Authenticating users
Any Security system involves managing users,
groups, and their credentials and access rights to
different resources. In this topic, you will examine
how authentication and authorization are
achieved in ASP.NET Core and what services can
be used in each scenario.

When you examine the authentication capabilities in ASP.NET Core Identity, there are several aspects
you need to support on your website. These aspects are located in the Areas/Identity part of
the project you created in the previous topic.

Each time you run a scaffolding command against the project, new classes that represent the model,
the controller API, and the UI representation are auto-generated.

Register API
The register endpoint allows users to join your website by providing their username and password or any
other authentication method such as social network credentials or OAuth tokens.
Here is the register code from the Register.cshtml.cs class that was auto-generated:

Generated code of the RegisterModel class


[AllowAnonymous]
public class RegisterModel : PageModel
{
private readonly SignInManager<IdentityUser> _signInManager;
private readonly UserManager<IdentityUser> _userManager;
private readonly ILogger<RegisterModel> _logger;
private readonly IEmailSender _emailSender;

public RegisterModel(
UserManager<IdentityUser> userManager,
SignInManager<IdentityUser> signInManager,

ILogger<RegisterModel> logger,
IEmailSender emailSender)
{
_userManager = userManager;
_signInManager = signInManager;
_logger = logger;
_emailSender = emailSender;
}

As with any model class that is part of the identity framework, the class holds several services such as:

• UserManager<IdentityUser> userManager. A manager class that handles user operations such as
adding and removing users and generating tokens for users.

• SignInManager<IdentityUser> signInManager. A manager class that handles sign-in operations
such as logon, logout, and two-factor authentication.

• ILogger<RegisterModel> logger. A logger class.

• IEmailSender emailSender. An email sender class for verifying emails.



All those services are injected by using the built-in Dependency Injection (DI) mechanism of .NET Core
apps.

The registration code inside the class handles the request after the user has filled in the register form and
sent their credentials (email address and password):

Registration code
public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
returnUrl = returnUrl ?? Url.Content("~/");
if (ModelState.IsValid)
{
var user = new IdentityUser { UserName = Input.Email, Email = Input.Email
};
var result = await _userManager.CreateAsync(user, Input.Password);
if (result.Succeeded)
{
_logger.LogInformation("User created a new account with password.");

…//Email Verification Code dropped for Brevity

await _signInManager.SignInAsync(user, isPersistent: false);

return LocalRedirect(returnUrl);
}
foreach (var error in result.Errors)
{
ModelState.AddModelError(string.Empty, error.Description);
}
}

// If we got this far, something failed, redisplay form


return Page();
}

Note: The user’s password is saved by using a hashing mechanism so that it is
kept securely in the database.
Note: The code already contains methods for the email confirmation process that use
the EmailSender service, although this is turned off by default.
Note: A user who registers successfully is logged on to the system by default.

Login API
The Login API allows users to provide their credentials (username and password) and be authenticated by
the identity middleware. The identity middleware will provide a query against the users table in the
database to verify if there is a match between the given credentials and the credentials stored on the
database. The password the user supplies during this process is hashed with the same algorithm it was
hashed with when the user registered in the system. This way, by applying on-direction hash-function,
only the password hashes are compared, and no user password is saved to the database.

Use the built-in SignInManager utility class to sign in by using a password.

Sign-in code
public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
    returnUrl = returnUrl ?? Url.Content("~/");

    if (ModelState.IsValid)
    {
        // Setting lockoutOnFailure: true causes failed sign-in attempts to count
        // towards account lockout
        var result = await _signInManager.PasswordSignInAsync(Input.Email, Input.Password,
            Input.RememberMe, lockoutOnFailure: true);
        if (result.Succeeded)
        {
            _logger.LogInformation("User logged in.");
            return LocalRedirect(returnUrl);
        }
        // … dropped for brevity
    }

    // If we got this far, something failed; redisplay the form
    return Page();
}

You can clearly see that the user logged on by using the built-in SignInManager utility class, which handles
all the password hashing, checks the database for that user, and then returns a result indicating whether the
user is authenticated.

Claims-based authorization
You have learned how to add an authentication
layer to your ASP.NET Core app. You have
provided your app with capabilities to register
users, logon, logout, and manage the user
information in a database.

In this topic, you will add an authorization layer.


Recall that authorization is the process that
follows authentication and its role is to determine
what an identity (user/group/device) is allowed to
do in a system according to its privileges.

Basic Claims Authorization


Recall from lesson 1 that a claim is an attribute of an identity, represented by a key-value combination.
Claims are aggregated into a token and are signed using a trusted authority.

To enable claims-based authorization, you need to register a policy that requires the claim. This is done as
part of the ConfigureServices code in the Startup.cs file:

Enabling claims
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();

services.AddAuthorization(options =>
{
options.AddPolicy("ManagerOnly", policy => policy.RequireClaim("IsManager"));
});
}

This will register a new policy called ManagerOnly, which requires the IsManager claim to exist in the
identity that accesses the protected resource. We will look at a simple controller, called ValuesController
in a typical web application or web API application.

The Controller barebones definition will be as follows:

Controller definition
[Route("api/[controller]")]
public class ValuesController : Controller
{

[HttpGet]
public IEnumerable<string> Get()
{
return new string[] { "value1", "value2" };
}
}

To restrict the web API to users with the required claim, we can add the following attribute to the Get
method:

Authorizing by claim
[Authorize(Policy = "ManagerOnly")]
[HttpGet]
public IEnumerable<string> Get()
{
return new string[] { "value1", "value2" };
}

In this simple type of claims policy, only the presence of the IsManager claim is enforced, regardless of
its value. Users without this claim type will not be able to access the controller endpoint.

The [Authorize] attribute can be placed at the level of the controller itself in the following manner:

Authorizing all APIs in a class


[Route("api/[controller]")]
[Authorize(Policy = "ManagerOnly")]
public class ValuesController : Controller
{

[HttpGet]
public IEnumerable<string> Get()
{
return new string[] { "value1", "value2" };
}
}

Note: The assignment of user and claims-relations is defined in the AspNetUserClaims


table that was generated when you ran the Db Migration command.
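As a minimal sketch, you can also populate that table from code by using the UserManager service; the user lookup and claim value shown here are illustrative assumptions:

Assigning a claim to a user (sketch)

using System.Security.Claims;

// Inside a class that has UserManager<IdentityUser> _userManager injected:
var user = await _userManager.FindByEmailAsync("manager@example.com"); // hypothetical user
await _userManager.AddClaimAsync(user, new Claim("IsManager", "true"));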

If anonymous access is required for a resource while other resources need to have authorization, you can
use the [AllowAnonymous] attribute in the following manner:

Excluding authorization to a single API


[Route("api/[controller]")]
[Authorize(Policy = "IsManager")]
public class ValuesController : Controller
{

[HttpGet]
public IEnumerable<string> Get()
{
return new string[] { "value1", "value2" };
}
[AllowAnonymous]
public ActionResult GetById(int Id)
{
return new string[] { "value3"};
}
}

In this example, the controller enforces the ManagerOnly policy, so only a user with the IsManager claim can
access the APIs; however, for the GetById endpoint, anonymous access is granted.

Claims with values


In most cases, you will want to grant access based on claims with specific values. For example, if your
application is a social network, you might want to restrict the usage of the app to users whose claims
indicate a certain age or birth date. In a different scenario, you might want to grant access to sensitive
data only to certain employees within a specific department, which will involve using a claim that looks at
the employee ID or the employee division number.

To use claims with value, you need to register the claim with its value or value list:

Using claims with value


public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy("HrDepartment", policy =>
            policy.RequireClaim("EmployeeId", "100", "101", "102"));
    });
}

In this example, a policy called HrDepartment requires a claim with the key EmployeeId and possible values of
100, 101, or 102. Therefore, only employees with one of those claim values will be able to use the protected
resource.

Using this policy on the controller class remains as before:

Using policy
[Route("api/[controller]")]
public class ValuesController : Controller
{

[HttpGet]
[Authorize(Policy = "EmployeeId")]
MCT USE ONLY. STUDENT USE PROHIBITED
9-18 Securing Services On-premises and in Microsoft Azure

public IEnumerable<string> Get()


{
return new string[] { "value1", "value2" };
}
}

In this example, only the employees with the corresponding IDs (100, 101, or 102) as their claim value will
be authorized for the Get API.

Extensibility features of ASP.NET Identity


The ASP.NET Core Identity framework comes with
an impressive set of tools and authentication
mechanisms that provide great flexibility while
choosing a security layer for your application. In
this topic, you will be introduced to some of the
more advanced aspects of the ASP.NET Core
Identity system. You will also learn how you can
leverage them in your application.

Social Network Integration


For example, now it is very common to allow users
to register for services by using their social
network credentials. This can enhance the user
experience because the users don’t have to remember many passwords and credentials. This also allows
you to get more information about your users, assuming that the user has agreed to share that
knowledge.

The following providers are supported by the ASP.NET Core Identity system:
• Microsoft
• Facebook

• Google
• Twitter

When configuring a social provider logon, in most cases, creating an app at the provider portal is one of
the preliminary steps. For example, for Twitter, you should create an app at https://apps.twitter.com/.
For Microsoft, you should create an app at https://apps.dev.microsoft.com.
After creating the app at the provider’s portal, you will get app-dedicated API tokens or/and a
combination of AppId and ClientId, which will be used later on as credentials for the authentication
service.

Recall that authentication services and configurations are configured in the ConfigureServices method in
the Startup.cs file.

The following example shows configuring a Microsoft account authentication:

Configuring Microsoft account authentication


services.AddAuthentication().AddMicrosoftAccount(microsoftOptions =>
{
microsoftOptions.ClientId = Configuration["Authentication:Microsoft:ApplicationId"];
microsoftOptions.ClientSecret = Configuration["Authentication:Microsoft:Password"];
});

Your application needs to supply the corresponding tokens received when registering the app at the
social provider portal (Microsoft, in this case). In the configuration API, the ClientId field is mapped to the
Microsoft application ID and the ClientSecret field is mapped to the Microsoft password.

After you have plumbed the corresponding configuration options and run your application, every time a
user logs on to the app, the user will be redirected to Microsoft for authentication. After successful logon,
the user will be redirected back to your app.
The following example shows the Microsoft social network integration at logon.

Configuring a social network provider is not a part of this course. For further information,
follow the instructions at:
https://aka.ms/moc-20487D-m9-pg1

Email Confirmation and Recovery


Many modern security systems increase the level of trust between users and the provider by encouraging
the user to confirm the registration process via email or SMS. In other words, unless the user has
confirmed their registration (usually by clicking on a link received in an email), they cannot log on to the
application. Other scenarios that involve mail integration allow users to recover or reset their passwords
by sending a reset link or a password reminder. In both cases, your app should be able to send emails or
text messages to users.

ASP.NET Core Identity allows for easy integration with those services, although it will not actually send the
email/text message by itself. It will require you to provide a service that does the actual sending or use a
third party. To enable the email confirmation capability in your app, perform the following steps:

Change the configuration at Areas/Identity/IdentityHostingStartup.cs to require a confirmed email:



Changing configuration to require a confirmed email


public class IdentityHostingStartup : IHostingStartup
{
    public void Configure(IWebHostBuilder builder)
    {
        builder.ConfigureServices((context, services) =>
        {
            services.AddDefaultIdentity<IdentityUser>(config =>
            {
                config.SignIn.RequireConfirmedEmail = true;
            });

            // Code was dropped for brevity…
        });
    }
}

The next step is to configure an email service. Email service is not part of the identity system and you need
to configure it either by writing an SMTP client mechanism or by using a third-party provider such as
SendGrid. If you use a third-party provider, you will need to create an account and configure access keys.
It is possible to use the built-in System.Net.Mail to send emails. However, it requires more effort and
security measures.

The next step is to write an email sender class that implements the IEmailSender interface and contains
the logic for sending an email.
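The following minimal sketch uses the built-in System.Net.Mail types; the SMTP host, port, sender address, and credentials are illustrative assumptions and should come from configuration:

An email sender implementation (sketch)

using System.Net;
using System.Net.Mail;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity.UI.Services;

public class EmailSender : IEmailSender
{
    public Task SendEmailAsync(string email, string subject, string htmlMessage)
    {
        var message = new MailMessage("no-reply@example.com", email, subject, htmlMessage)
        {
            IsBodyHtml = true
        };

        var client = new SmtpClient("smtp.example.com", 587)
        {
            Credentials = new NetworkCredential("smtpUser", "smtpPassword"),
            EnableSsl = true
        };

        return client.SendMailAsync(message);
    }
}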
Now you can plug in the email sender class so that the identity middleware will use it. Add the following
code to Startup.cs:

Registering the IEmailSender dependency in the ConfigureServices method


public void ConfigureServices(IServiceCollection services)
{
services.Configure<CookiePolicyOptions>(options =>
{
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});

services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

// requires
// using Microsoft.AspNetCore.Identity.UI.Services;
// using WebPWrecover.Services;
services.AddSingleton<IEmailSender, EmailSender>();
services.Configure<AuthMessageSenderOptions>(Configuration);
}

Finally, prevent users from auto-logon after registration. In the previous topics, you learned that after
registration, SignInManager is used to log on users. You need to prevent auto-logon so that only users
who have confirmed their email addresses are allowed to log on.
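A minimal sketch of that change in the Register page handler follows; the email text and the redirect target are illustrative, and the key point is that _signInManager.SignInAsync is no longer called after registration:

Registration without automatic logon (sketch)

if (result.Succeeded)
{
    _logger.LogInformation("User created a new account with password.");

    var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
    var callbackUrl = Url.Page(
        "/Account/ConfirmEmail",
        pageHandler: null,
        values: new { userId = user.Id, code },
        protocol: Request.Scheme);

    await _emailSender.SendEmailAsync(Input.Email, "Confirm your email",
        $"Please confirm your account by clicking this link: {callbackUrl}");

    // Do not call _signInManager.SignInAsync here; the user must confirm the email first
    return RedirectToPage("./Login");
}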

For more information on how to enable email confirmation, follow the link at:
https://aka.ms/moc-20487D-m9-pg2

Additional Identity Features


The identity framework has many other features and capabilities that allow for robustness, flexibility, and
enhanced security. Among these features, the following are worth mentioning:

• Two-factor authentication with QR code generation. Today, many applications use two-factor
authentication, which increases the level of security by getting the users to identify themselves with at
least two sets of credentials. For example, a user-password combination with an email address or an
SMS confirmation. Another method involves generating a QR code.

• Combine local and social accounts. ASP.NET identity allows users to log on with their social account. If
the social provider service is not available, users will be allowed to log on with their local account.
• Role-based authorization. In role-based authorization, a role, such as Admin, is assigned to a user or
to a group. This role can later be used for the authorization process. Similar to claims-based
authorization, the role checks are made declarative by using attributes such as [Authorize(Roles =
"Admin,DBA")].

For more information on role-based authorization, refer to “Role-based authorization in ASP.NET Core” at:
https://aka.ms/moc-20487D-m9-pg3

• Using different store providers. The ASP.NET Core Identity system is not limited to SQL Server. It is a
pluggable system that allows developers to use any storage as long as it supports the store interfaces
used by Identity, such as IUserStore<TUser>.

For more information on configuring a custom storage provider, go to:


https://aka.ms/moc-20487D-m9-pg4
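The following is a minimal sketch of declarative role-based authorization in an ASP.NET Core controller;
the Admin and DBA role names are placeholders for roles defined in your own application.

Restricting a controller to specific roles

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize(Roles = "Admin,DBA")]
[Route("api/[controller]")]
public class MaintenanceController : Controller
{
    // Only authenticated users that belong to the Admin or DBA role reach this action.
    [HttpGet]
    public IActionResult Get() => Ok("Sensitive maintenance data");
}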

Lab A: Using ASP.NET Core Identity


Scenario
In this lab, you will secure an ASP.NET Core Web API with ASP.NET Core Identity.

Objectives
After you complete this lab, you will be able to:

• Use the ASP.NET Core Identity middleware.

• Authorize users for specific service.

• Test an ASP.NET Core service with the authentication and authorization process.

Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD09_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD09_LAK.md.

Exercise 1: Add ASP.NET Core Identity Middleware


Scenario
Add ASP.NET Core Identity middleware to an ASP.NET Core service.

Exercise 2: Add Authorization Code


Scenario
Implement authorization using ASP.NET Core Identity.

Exercise 3: Run a Client Application to Test the Server


Scenario
Test the service with client application

Lesson 3
Securing Services with Azure AD
Azure AD is Microsoft’s multi-tenant, cloud-based directory and identity management service. Azure AD
combines core directory services, advanced identity governance, and application access management.
Azure AD also offers a rich, standards-based platform that enables developers to deliver access control to
their applications, based on centralized policy and rules.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the basic authentication and authorization concepts and protocols required for working with
Azure AD.
• Explain how to manage Azure AD.

• Describe the integration of .NET Core server applications with Azure AD.
• Describe Azure AD B2C.

• Describe the integration of ASP.NET Core Web API applications with Azure AD B2C.

OAuth concepts and authorization flow


Claims-based identity standards are complex and
involve advanced cryptography. To simplify the
development of applications that use claims-
based identity, you need to use infrastructure that
will not require you to implement the standards
and understand advanced cryptography.

This infrastructure has to rely on solid protocols


such as OAuth 2.0 and OpenID Connect 1.0. You
will learn about these two protocols in this lesson.

Why do we need an authorization protocol? In the


traditional client-server authentication model, the client requests an access-restricted (protected) resource
on the server by authenticating with the server using the resource owner's credentials. To give a
third-party application access to a restricted resource, the resource owner has to share those credentials
with the third-party application. This leads to the following problems and
limitations:

• Third-party applications are required to store the resource owner's credentials for future use, typically
a password in clear-text.

• Servers are required to support password authentication, despite the security weaknesses inherent in
passwords.

• Third-party applications gain overly broad access to the resource owner's protected resources, leaving
resource owners without any ability to restrict duration or access to a limited subset of resources.

• Resource owners cannot revoke access to an individual third party without revoking access to all third
parties and must do so by changing the third party's password.

• Compromise of any third-party application results in compromise of the user's password and all the
data protected by that password.

The solution for these problems and limitations is to force a separation between the client (web browser)
and the resource owner (user). This is done by separating the roles of the resource server and the
authorization server. The resource owner uses their credentials to authenticate against the authorization
server, which in turn provides an access token that is then used instead of the original resource owner’s
credentials. The access token represents an authorization issued to the client.
It is possible to put the solution described above into the following abstract flow:

1. The client authenticates with the authorization server and is issued an authorization grant.

2. Using the received authorization grant, the client proceeds to request an access token from an
authorization server.

3. Given that the authorization grant received from the client is valid, the authorization server generates
an access token and returns it to the client.

4. With the access token at hand, the client then sends the access token to the resource server to access
a protected resource.
5. The resource server uses the access token to obtain information about the user and the authorization
that was granted. With this information at hand, the resource server decides whether to authorize
access to the requested protected resource. If it decides to authorize access, the protected resource is
served back to the client.
The flow presented above is an abstract flow. It is abstract in the sense that the authorization grant isn’t
defined. OAuth 2.0 has various flows, which are based on the abstract flow. Each flow brings its own
implementation for the authorization grant. The most important flows that you will learn about in this
course are the Authorization Code grant flow and the Implicit grant flow. To learn about other flows that
are specified in the OAuth 2.0 specification, refer to the OAuth 2.0 website. To learn more about OAuth
2.0, refer to the documentation at the following link:

OAuth 2.0
http://go.microsoft.com/fwlink/p/?linkid=214783

OAuth 2.0 authorization code


The Authorization Code flow is one of the two
important OAuth 2.0 flows that are covered in this
module and is considered more secure. As
explained in the abstract flow, to have access to
protected resources, the client needs to eventually
receive an access token. The Authorization Code
flow is considered more secure because in this
flow, to receive the access token, the client and
the authorization server have to first exchange the
resource owner’s credentials for an authorization
grant, and only then exchange that authorization
grant for an access token.

The Authorization Code flow is based on the basic OAuth flow and works as follows:

1. As mentioned in the abstract flow, the starting point is after the user is already authenticated. The
client redirects the resource owner’s user-agent to the authorization endpoint (this typically exists on
the authorization server). The client includes the following parameters in the redirection URL:

a. response_type. The value here is always code when using the Authorization Code flow.

b. client_id. A unique identifier recognized by the authorization server.

c. scope. The scope of the access request is a list of resources the client will be allowed to access. An
example is using a scope parameter to give the client access to the user’s profile.
d. redirect_uri. After the resource owner has either approved or denied the authorization request,
this is the URI to which the authorization server will redirect the resource owner’s user-agent. This
is usually a URI on the client application.

e. state. This is data shared between the authorization request and the callback invoked by the
supplied redirection URI. A good example would be to supply an application redirection URL so
the user can return to the same page they tried to access before being redirected to the
authorization server.

Sample Authorization Code request URL:

GET /authorize?response_type=code&client_id=s6BhdRkqt3&state=xyz
&redirect_uri=https://client.example.com/cb

2. On receiving the authorization request, the authorization server prompts the resource owner to
approve or deny the access request.
3. If the resource owner approves the request, the authorization server redirects the resource owner’s
user-agent to the URL specified in the redirect_uri parameter that was supplied as part of the
authorization request. As part of the redirection, the authorization server includes the authorization
code in the URL and any state provided in the authorization request.
4. Using the authorization code, the client requests an access token from the token endpoint (usually
another endpoint on the authorization server), the request consists of the following parameters:
a. grant_type. authorization_code
b. code. The authorization code received in step 3.

c. redirect_uri. The same redirect_uri provided in the authorization request.

d. client_id. The same client_id provided in the authorization request.


5. The authorization server validates the authorization code, and if valid, returns an access token.

The client includes the access token in the HTTP authorization header for every request performed against
the resource server. The resource server validates the access token and if valid, it returns the protected
resource.
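For illustration, the token request described in step 4 might look like the following HTTP request; all values
are placeholders.

Sample access token request

POST /token HTTP/1.1
Host: authorization-server.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=SplxlOBeZQQYbYS6WxSbIA
&redirect_uri=https%3A%2F%2Fclient.example.com%2Fcb&client_id=s6BhdRkqt3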

Note: The Authorization Code flow provides its extra security only if the client can keep the
authorization code exchange (and its client credentials) away from the user-agent. This is why, in
browser-based client applications, the Authorization Code flow does not bring enough benefit and only
complicates the authorization process.

OAuth 2.0 implicit flow


The Implicit flow is the second authorization flow
covered in this module. In many ways, the Implicit
and Authorization Code flows are similar, with one
key difference: in the Implicit flow, the
authorization code exchange step is skipped.
Instead, the client receives the access token directly
from the authorization endpoint as part of the
redirection back to the client.

This is what the Implicit flow looks like:


1. Just as in all the OAuth 2.0 flows, the starting
point is after the user is already
authenticated. The client redirects the
resource owner’s user-agent to the
authorization endpoint (it typically exists on the authorization server), and the client includes the
following parameters in the redirection URL:
o response_type. The value here is always token when using the Implicit flow.

o client_id. A unique identifier recognized by the authorization server.

o scope. A scope of the access request. This is a list of resources the client will be allowed to access.
For example, using a scope parameter to give the client access to the user’s profile.

o redirect_uri. After the resource owner has either approved or denied the authorization request,
this is the URI to which the authorization server will redirect the resource owner’s user-agent. This
is usually a URI on the client application.

o state. This is data shared between the authorization request and the callback invoked by the
supplied redirection URI. A good example would be to supply an application redirection URI so
the user can return to the same page they tried to access before being redirected to the
authorization server.
Sample Implicit flow authorization request URL:

GET /authorize?response_type=token&client_id=s6BhdRkqt3&state=xyz
&redirect_uri=https://client.example.com/cb

2. Upon receiving the authorization request, the authorization server prompts the resource owner to
approve or deny the access request.

3. If the resource owner approves the request, the authorization server redirects the resource owner’s
user-agent to the URL specified in the redirect_uri parameter that was supplied as part of the
authorization request. As part of the redirection, the authorization server includes the access token in
the URL fragment, along with any state provided in the authorization request.

4. The client includes the access token in the HTTP authorization header for every request performed
against the resource server. The resource server validates the access token, and if valid, it returns the
protected resource.
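For illustration, the redirection described in step 3 might look like the following response; all values are
placeholders.

Sample redirection with the access token in the URL fragment

HTTP/1.1 302 Found
Location: https://client.example.com/cb#access_token=2YotnFZFEjr1zCsicMWpAA
&token_type=Bearer&expires_in=3600&state=xyz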

OpenID Connect 1.0


To properly secure web applications, it is
necessary to be able to identify users and control
their access to resources. For example, in a social
network, we wouldn’t want a user to be able to
view the private posts of other users.

In the past, each application maintained its own


database of users and their credentials. This
worked when web applications were still rather
simple and the awareness of the need for
information security wasn’t as prominent as it is
today. Nowadays, the damage of a leaked identity
is huge, from exposing private email messages to
performing monetary transactions on behalf of a user without their knowledge.
The solution to this is simple. Instead of trying to host user identities on the website itself, simply use an
identity provider, such as Facebook, Google, or Microsoft that employs far superior security measures than
an individual website could possibly hope to employ.
With the rise of centralized identity providers came the need to have a standard way of handling
identities. OpenID Connect is one such protocol and is considered the modern way to handle identities.
OpenID Connect is a simple identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines
mechanisms to obtain and use access tokens to access protected resources, but they do not define
standard methods to provide identity information. OpenID Connect implements authentication as an
extension to the OAuth 2.0 authorization process. It provides information about the user in the form of an
id_token that verifies the identity of the user and provides basic profile information about the user.
Since OpenID Connect is based on OAuth 2.0, it also supports multiple flows, out of which this lesson will
briefly describe the OpenID Connect Authorization Code flow.

Note: In the next lesson, “Azure Active Directory B2C,” you will learn about the OAuth 2.0
Implicit flow and its OpenID Connect version.

The Authorization Code flow that uses OpenID connect is simple:

1. The client prepares an authentication request, which is very similar to the OAuth 2.0 authorization
request.

2. The client sends the request to the authorization server.

3. The authorization server authenticates the user.

4. The authorization server obtains the user’s consent or authorization.

5. The authorization server sends the user back to the client with an authorization code.

6. The client sends a request containing the authorization code to the token endpoint.

7. The client receives a response containing an access token and an ID token.

8. The client validates the ID token and uses it as a source of the user’s information.

9. From here, the flow continues in a similar fashion to the OAuth 2.0 Authorization Code flow. The
client may send the access token to the resource server to gain access to a protected resource.
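For illustration, the payload of a decoded ID token contains standard OpenID Connect claims, similar to
the following; all values are placeholders.

Sample ID token payload (decoded)

{
  "iss": "https://login.example.com/",
  "sub": "AItOawmwtWwcT0k51BayewNvutrJUqsvl6qs7A4",
  "aud": "s6BhdRkqt3",
  "exp": 1311281970,
  "iat": 1311280970,
  "nonce": "n-0S6_WzA2Mj",
  "name": "Jane Doe",
  "email": "janedoe@example.com"
}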

Client Libraries
There are four main OpenID Connect client libraries that are relevant to this course:

• ASP.NET Core Authentication middleware. This is the middleware for .NET Core-based server
applications that need to use an external identity provider.

• OWIN OpenID Connect middleware. This is the middleware for .NET Framework-based server
applications that need to use an external identity provider.

• MSAL.NET. The Microsoft Authentication Library for .NET.

• MSAL.JS. The Microsoft Authentication Library for JavaScript.


This module focuses on the standard ASP.NET Core authentication middleware and MSAL.

What is Azure AD?


There are different types of identity providers
available, such as Active Directory Federation
Service 2.0 (ADFS) in the corporate world of
directory services, and Facebook and Twitter in
the social network world. Depending on the
nature of your application, you can choose the
appropriate provider.
From a business point of view, it is essential that
you choose the right identity provider. For
example, in a consumer-facing application, it
makes perfect sense to use a provider such as
Microsoft, Google, or Facebook to ease the
process of creating a user for a website while ensuring the user’s true identity. On the other hand, an
enterprise application would benefit by integrating with other business applications such as Microsoft
Exchange.

Interfacing with so many identity providers is not an easy task. Each provider can use a different protocol
and expose different claims. For this task, it is best to use an existing infrastructure (if available) rather
than trying to implement such an abstraction yourself. Azure offers exactly such an infrastructure, Azure
AD, and it is the main topic of this module and this lesson.

What is Azure AD?


Azure AD is the main identity infrastructure offered by Azure. Azure AD can be broken down into three
distinct services that make up the entire identity infrastructure:
• Azure AD service. The base for all Azure AD-related services. It is an identity provider and a directory
service.

• Azure AD B2C service. It covers user identity use cases. It is intended for customer-facing
authentication and authorization scenarios.

• Azure AD B2B collaboration. It covers B2B scenarios, such as providing access to partners into
organizational assets.

This course covers the core Azure AD service and Azure AD B2C. Azure AD B2B is intentionally left out, but
you can read more about it by going to the following page:

What is Azure AD B2B collaboration?

https://aka.ms/moc-20487c-m11-pg4

As mentioned above, Azure AD is an identity provider and it supports adding and removing users. It also
exposes authentication- and authorization-related endpoints and exposes user data as tokens that contain
claims. Azure AD is also a directory service and includes the concepts of groups, memberships, role
management, and other directory features.

Azure AD exposes identities through different authentication or authorization protocols. The protocols
that this course focuses on are OAuth 2.0 and OpenID Connect 1.0, and identity data transfer over JWTs.

Note: Aside from OAuth 2.0 and OpenID Connect 1.0, Azure AD also supports WS-
Federation and identity data transfer over SAML tokens, both of which are not covered in this
course, but you can read about them by going to the following page:

WS-Federation based authentication and authorization

https://aka.ms/moc-20487c-m11-pg5

Subscriptions and Azure AD


When you sign up for Azure, you automatically get a subscription entity and an Azure AD tenant. Every
resource that is created on Azure is associated with a subscription. The subscription keeps track of the
usage of different resources, and from that, the end-of-month bill is calculated.

Azure AD, as mentioned above, is both an identity provider and a directory service, and so it manages
users and provides fine-grained access to different Azure resources. Every Azure AD tenant is linked to a
subscription and using that link, Azure AD can provide access management services for resources
associated with that subscription.

Managing Users in Azure AD


Every resource you create in Azure is created under a subscription. Azure AD, on the other hand, behaves
a little differently. As mentioned in the previous topic, Azure AD is both an identity provider and a
directory service, thus it has to support the management of users and provide tools for fine-grained
access control to resources.

Managing users
With Azure AD, you can easily create and remove users, edit users’ details, and reset passwords. You can
also invite users by email. This means that you have complete control of user management. Users cannot
just sign up to your directory and have access to resources.

From the description above, you can infer that the typical use case for Azure AD is to manage
organizational users. For example, a company would have an Azure AD tenant and the R&D department
would have users associated with that Azure AD tenant.

To get started with Azure AD:

1. Open a browser, go to the Azure portal, and then, in the navigation pane, click Azure Active
Directory.
2. On the Azure Active Directory blade, click Users. This will lead you to the users management dialog.

3. On the Users blade, you can see a list of existing users (this will probably show only one user,
yourself). You can also create a new user or invite another user.

Managing access to resources


Azure AD provides directory access services in four different ways:

• Direct assignment. Users are assigned directly to an Azure resource (for example, to an app service).
• Group membership. The group is assigned to a resource and users that are members of this group
have access to the resource.

• Rule based. This is a special case of group membership. For example, all users where the department
matches R&D.

• External authority. Access to resources is controlled by an external authority. For example, access is
controlled by using data from an on-premises active directory instance.
Directory access is different from application-level access (application authorization). Directory access
means that while navigating through the portal, the directory access services will dictate which user can
access what resources and to what degree that user can manage resources. On the other hand,
application-level access controls the access to application-level resources, such as specific pages or
endpoints within the application.

This course covers group memberships because groups can be used as claims in ASP.NET applications,
and you can even use groups instead of roles to control access to specific application-level resources.
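As an illustration, the following sketch defines an ASP.NET Core authorization policy that requires
membership in a specific Azure AD group, assuming group object IDs are emitted in a groups claim; the
policy name and group GUID are placeholders.

Requiring a group claim in an authorization policy

services.AddAuthorization(options =>
{
    // The GUID is the object ID of an Azure AD group; replace it with a real group ID.
    options.AddPolicy("RnDOnly", policy =>
        policy.RequireClaim("groups", "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"));
});

// Apply the policy on a controller or action:
// [Authorize(Policy = "RnDOnly")]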

To learn more about other ways to control access by using Azure AD, go to the following page:
Azure AD Documentation – How to manage groups and members
https://aka.ms/moc-20487D-m9-pg5

Groups can be added through the Azure portal and after adding a group, users can be assigned as
members to the group. A user can be a member of multiple groups. Groups can be members of other
groups.
To manage group and memberships:

1. Open a browser, navigate to the Azure portal, and then, in the navigation pane, click Azure Active
Directory.

2. On the Azure Active Directory blade, click Groups.

3. On the Groups blade, you can create a new group, view the details of existing groups, and manage
group memberships.

Demonstration: Creating an Azure Active Directory and Users


In this demonstration, you will see how to access Azure AD, add and remove users, add and remove
groups, and associate users with groups.

Demonstration Steps
You will find the steps in the “Demonstration: Creating an Azure Active Directory and Users” section on
the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD09_DEMO.md

Registering applications in Azure AD


Azure AD really shines when used to secure
ASP.NET applications. All the topics above were
needed to understand how ASP.NET applications
can be secured by using Azure AD.

Azure AD Applications
Azure AD has a concept of applications. An Azure
AD application is a contract between an
application developed by a company and Azure
AD. Azure AD can be integrated with any kind of
application by using Azure AD applications.

An Azure AD application has the following


attributes:
• Name. The name of the application.

• Application Type. Select Native for client applications that are installed locally on a device. This
setting is used for OAuth public native clients. Select Web app / API for client
applications and resources or API applications that are installed on a secure server. This setting is used
for OAuth confidential web clients and public user-agent-based clients. The same application can also
expose both a client and resource or API.

• Sign-on URL. For web apps and API apps, provide the base URL of your app. For
example, http://localhost:31544 might be the URL for a web app running on your local machine.
Users would use this URL to sign in to a web client application.
• Redirect URI. For native applications, provide the URI used by Azure AD to return token responses.
Enter a value specific to your application; for example, http://MyFirstAADApp.

Apart from the attributes defined above, an Azure AD application also has a set of permissions; for
example, permissions to read directory data.

To connect any application to Azure AD, you need an Azure AD application. The application has to be
configured correctly both on Azure AD and on the ASP.NET application.

Configuration setup
Depending on the type of application you are building, you may or may not need to include
authentication in the application. If you are building an MVC application, you will probably need to
include a way for the user to authenticate. If you are building a set of REST APIs, the application will only
expect a valid token, meaning that the authentication part isn’t the responsibility of the application.

For simplicity, let’s assume that the application does not need to handle user sign-in itself and only needs
to validate tokens. In that case, ASP.NET Core provides an authentication scheme called AzureADBearer.
This scheme is different from old-fashioned user name and password authentication; it requires a JWT to
be provided in the authorization HTTP header. When

ASP.NET Core receives a JWT, it validates the token against the identity provider (Azure AD) and if valid,
the user details are read from the JWT into a new ClaimsIdentity instance.

Configuration Entries
To use the AzureADBearer authentication scheme, ASP.NET Core needs to have the following parameters
defined:

• Instance. This is the URL of the identity provider instance.


• Domain. This is the specific domain acknowledged by the instance (in Azure AD).

• ClientId. This is a unique identifier of an Azure AD application registration.

• TenantId. This is the unique identifier of an Azure AD tenant.


Configuring Azure AD details for ASP.NET Core

ASP.NET Core application configuration of Azure AD


"AzureAd": {
  "Instance": "https://login.microsoftonline.com",
  "Domain": "example.onmicrosoft.com",
  "TenantId": "2a2a2b2f-2c2d-a2b2-2c2d-2e2fabcd5791",
  "ClientId": "11111111-1111-1111-1111-111111111111"
}
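The following is a minimal sketch of wiring up this scheme in ConfigureServices, assuming the
Microsoft.AspNetCore.Authentication.AzureAD.UI NuGet package and the "AzureAd" configuration section
shown above.

Registering the AzureADBearer authentication scheme

// using Microsoft.AspNetCore.Authentication.AzureAD.UI;
public void ConfigureServices(IServiceCollection services)
{
    // Bind Instance, Domain, TenantId, and ClientId from the "AzureAd" configuration section.
    services.AddAuthentication(AzureADDefaults.BearerAuthenticationScheme)
        .AddAzureADBearer(options => Configuration.Bind("AzureAd", options));

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}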

Using OpenID Connect middleware in ASP.NET Core and Azure AD


After registering the application in Azure AD, implementing authentication in an ASP.NET Core application
is as simple as adding middleware to the pipeline. First, add the
Microsoft.IdentityModel.Clients.ActiveDirectory NuGet package.

Then declare the AuthPropertiesTokenCache class that helps with managing the authentication tokens.

The AuthPropertiesTokenCache class

Managing authentication class declaration


using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Http;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Security.Claims;

namespace WebApplication13
{
public class AuthPropertiesTokenCache : TokenCache
{
private const string TokenCacheKey = ".TokenCache";

private HttpContext _httpContext;


private ClaimsPrincipal _principal;
private AuthenticationProperties _authProperties;
private string _signInScheme;

private AuthPropertiesTokenCache(AuthenticationProperties authProperties) :


base()
{
_authProperties = authProperties;
BeforeAccess = BeforeAccessNotificationWithProperties;
AfterAccess = AfterAccessNotificationWithProperties;
BeforeWrite = BeforeWriteNotification;
}

private AuthPropertiesTokenCache(HttpContext httpContext, string signInScheme) : base()


{
_httpContext = httpContext;
_signInScheme = signInScheme;
BeforeAccess = BeforeAccessNotificationWithContext;
AfterAccess = AfterAccessNotificationWithContext;
BeforeWrite = BeforeWriteNotification;
}

public static TokenCache ForCodeRedemption(AuthenticationProperties


authProperties)
{
return new AuthPropertiesTokenCache(authProperties);
}

public static TokenCache ForApiCalls(HttpContext httpContext,


string signInScheme = CookieAuthenticationDefaults.AuthenticationScheme)
{
return new AuthPropertiesTokenCache(httpContext, signInScheme);
}

private void BeforeAccessNotificationWithProperties(TokenCacheNotificationArgs


args)
{
string cachedTokensText;
if (_authProperties.Items.TryGetValue(TokenCacheKey, out cachedTokensText))
{
var cachedTokens = Convert.FromBase64String(cachedTokensText);
Deserialize(cachedTokens);
}
}

private void BeforeAccessNotificationWithContext(TokenCacheNotificationArgs args)


{
// Retrieve the auth session with the cached tokens
var result = _httpContext.AuthenticateAsync(_signInScheme).Result;
_authProperties = result.Ticket.Properties;
_principal = result.Ticket.Principal;

BeforeAccessNotificationWithProperties(args);
}

private void AfterAccessNotificationWithProperties(TokenCacheNotificationArgs


args)
{
// if state changed
if (HasStateChanged)
{
var cachedTokens = Serialize();
var cachedTokensText = Convert.ToBase64String(cachedTokens);
_authProperties.Items[TokenCacheKey] = cachedTokensText;
}
}

private void AfterAccessNotificationWithContext(TokenCacheNotificationArgs args)


{
// if state changed
if (HasStateChanged)

{
AfterAccessNotificationWithProperties(args);

var cachedTokens = Serialize();


var cachedTokensText = Convert.ToBase64String(cachedTokens);
_authProperties.Items[TokenCacheKey] = cachedTokensText;
_httpContext.SignInAsync(_signInScheme, _principal,
_authProperties).Wait();
}
}

private void BeforeWriteNotification(TokenCacheNotificationArgs args)


{
// if you want to ensure that no concurrent write takes place, use this
notification to place a lock on the entry
}
}
}

Configure the ASP.NET Core application to use Azure AD authentication.


Configure authentication

ASP.NET Core configuration for Azure AD authentication


public void ConfigureServices(IServiceCollection services)
{
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
services.AddAuthentication(sharedOptions =>
{
sharedOptions.DefaultScheme =
CookieAuthenticationDefaults.AuthenticationScheme;
sharedOptions.DefaultChallengeScheme =
OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, "AAD", o =>
{
o.ClientId = ClientId;
o.ClientSecret = ClientSecret;
o.Authority = Authority;
o.ResponseType = OpenIdConnectResponseType.CodeIdToken;
o.SignedOutRedirectUri = "/signed-out";
o.Events = new OpenIdConnectEvents()
{
OnAuthorizationCodeReceived = async context =>
{
var request = context.HttpContext.Request;
var currentUri = UriHelper.BuildAbsolute(request.Scheme,
request.Host, request.PathBase, request.Path);
var credential = new ClientCredential(ClientId, ClientSecret);
var authContext = new AuthenticationContext(Authority,
AuthPropertiesTokenCache.ForCodeRedemption(context.Properties));

var result = await


authContext.AcquireTokenByAuthorizationCodeAsync(
context.ProtocolMessage.Code, new Uri(currentUri),
credential, Resource);

context.HandleCodeRedemption(result.AccessToken, result.IdToken);
},
OnAuthenticationFailed = c =>
{
c.HandleResponse();

c.Response.StatusCode = 500;
c.Response.ContentType = "text/plain";

return c.Response.WriteAsync(c.Exception.ToString());
}
};
});
}

Add the ClientId, ClientSecret, and Authority values to the application settings.
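For example, the settings might be stored in appsettings.json in a shape similar to the following; all values
are placeholders, and the Authority is typically https://login.microsoftonline.com/ followed by your tenant
ID.

Sample application settings for the OpenID Connect middleware

"AzureAd": {
  "ClientId": "11111111-1111-1111-1111-111111111111",
  "ClientSecret": "<client secret created for the Azure AD application>",
  "Authority": "https://login.microsoftonline.com/2a2a2b2f-2c2d-a2b2-2c2d-2e2fabcd5791"
}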

Add the authentication middleware and implement the sign-in and sign-out handlers.
Add middleware and implement sign-in and sign-out handlers

Authentication implementation
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseHsts();
}

app.UseHttpsRedirection();
app.UseAuthentication();
app.UseMvc();
app.Run(async context =>
{
if (context.Request.Path.Equals("/signin"))
{
if (context.User.Identities.Any(identity => identity.IsAuthenticated))
{
// User has already signed in
context.Response.Redirect("/");
return;
}

await context.ChallengeAsync(new AuthenticationProperties { RedirectUri = "/"


});
}
else if (context.Request.Path.Equals("/signout"))
{
await
context.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
await WriteHtmlAsync(context.Response,
async response =>
{
await response.WriteAsync($"<h1>Signed out locally:
{HtmlEncode(context.User.Identity.Name)}</h1>");
await response.WriteAsync("<a class=\"btn btn-primary\"
href=\"/\">Sign In</a>");
});
}
else if (context.Request.Path.Equals("/signout-remote"))
{
await
context.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
await context.SignOutAsync(OpenIdConnectDefaults.AuthenticationScheme);
}
else if (context.Request.Path.Equals("/signed-out"))
{
await WriteHtmlAsync(context.Response,
async response =>
{

await response.WriteAsync($"<h1>You have been signed out.</h1>");


await response.WriteAsync("<a class=\"btn btn-primary\"
href=\"/signin\">Sign In</a>");
});
}
else if (context.Request.Path.Equals("/remote-signedout"))
{
await
context.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
await WriteHtmlAsync(context.Response,
async response =>
{
await response.WriteAsync($"<h1>Signed out remotely:
{HtmlEncode(context.User.Identity.Name)}</h1>");
await response.WriteAsync("<a class=\"btn btn-primary\"
href=\"/\">Sign In</a>");
});
}
else
{
if (!context.User.Identities.Any(identity => identity.IsAuthenticated))
{
await context.ChallengeAsync(new AuthenticationProperties { RedirectUri =
"/" });
return;
}

await WriteHtmlAsync(context.Response, async response =>


{
await response.WriteAsync($"<h1>Hello Authenticated User
{HtmlEncode(context.User.Identity.Name)}</h1>");
await response.WriteAsync("<a class=\"btn btn-default\"
href=\"/signout\">Sign Out Locally</a>");
await response.WriteAsync("<a class=\"btn btn-default\" href=\"/signout-
remote\">Sign Out Remotely</a>");

await response.WriteAsync("<h2>Claims:</h2>");
await WriteTableHeader(response, new string[] { "Claim Type", "Value" },
context.User.Claims.Select(c => new string[] { c.Type, c.Value }));

await response.WriteAsync("<h2>Tokens:</h2>");
try
{
// Use ADAL to get the right token
var authContext = new AuthenticationContext(Authority,
AuthPropertiesTokenCache.ForApiCalls(context,
CookieAuthenticationDefaults.AuthenticationScheme));
var credential = new ClientCredential(ClientId, ClientSecret);
string userObjectID =
context.User.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").V
alue;
var result = await authContext.AcquireTokenSilentAsync(Resource,
credential, new UserIdentifier(userObjectID, UserIdentifierType.UniqueId));

await
response.WriteAsync($"<h3>access_token</h3><code>{HtmlEncode(result.AccessToken)}</code><
br>");
}
catch (Exception ex)
{
await response.WriteAsync($"AquireToken error: {ex.Message}");
}
});
}
});
}

private static async Task WriteHtmlAsync(HttpResponse response, Func<HttpResponse, Task>


writeContent)
{
var bootstrap = "<link rel=\"stylesheet\"
href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\"
integrity=\"sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u\"
crossorigin=\"anonymous\">";

response.ContentType = "text/html";
await response.WriteAsync($"<html><head>{bootstrap}</head><body><div
class=\"container\">");
await writeContent(response);
await response.WriteAsync("</div></body></html>");
}

private static async Task WriteTableHeader(HttpResponse response, IEnumerable<string>


columns, IEnumerable<IEnumerable<string>> data)
{
await response.WriteAsync("<table class=\"table table-condensed\">");
await response.WriteAsync("<tr>");
foreach (var column in columns)
{
await response.WriteAsync($"<th>{HtmlEncode(column)}</th>");
}
await response.WriteAsync("</tr>");
foreach (var row in data)
{
await response.WriteAsync("<tr>");
foreach (var column in row)
{
await response.WriteAsync($"<td>{HtmlEncode(column)}</td>");
}
await response.WriteAsync("</tr>");
}
await response.WriteAsync("</table>");
}

private static string HtmlEncode(string content) =>


string.IsNullOrEmpty(content) ? string.Empty : HtmlEncoder.Default.Encode(content);

For more information about OpenID Connect middleware, go to:


https://aka.ms/moc-20487D-m9-pg6

Demonstration: Securing an ASP.NET Core application using OpenID


Connect and AAD
In this demonstration, you will see how to authenticate to an ASP.NET Core service with Azure Active
Directory.

Demonstration Steps
You will find the steps in the “Demonstration: Securing an ASP.NET Core application using OpenID
Connect and AAD“ section on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD09_DEMO.md.

What is Azure AD B2C?


Azure AD B2C is a cloud identity management
solution for your web and mobile applications. It
is a highly available global service that scales to
hundreds of millions of identities. Built on an
enterprise-grade secure platform, Azure AD B2C
keeps your applications, your business, and your
customers protected.

With minimal configuration, Azure AD B2C


enables your application to authenticate:

• Social accounts such as Facebook, Google,


and LinkedIn.

• Enterprise accounts that use open standard protocols such as OpenID Connect or SAML.

• Local accounts, which include an email address and password or username and password.

Integrating enterprise accounts is supported by an advanced feature called identity experience framework
and is not covered in this module. You can learn more about it on the following page:

Identity Experience Framework


https://aka.ms/moc-20487c-m11-pg7

Creating Azure B2C Tenants


Azure AD B2C is a different product from Azure AD, and so, a B2C tenant is not supplied by default when
creating a new Azure account. To create an Azure AD B2C tenant, perform the following steps:
1. Create a new resource and search for Azure AD B2C.

2. In the Create new B2C Tenant or Link to existing Tenant blade, choose Create a new Azure AD
B2C Tenant.
3. Enter the required details and create the tenant. It should take about a minute.
4. Eventually, you will be presented with an information box. Click the link to go to the new Azure AD
B2C tenant.

As stated earlier in this module, Azure has two related entities: subscriptions and tenants. A tenant
without an active subscription is inactive. When a new Azure AD B2C tenant is created, it is not linked to
any subscription and needs to be manually linked to one.
To link an Azure AD B2C tenant to a subscription:

1. Make sure you are on the primary Azure AD tenant; you can verify this by opening the account menu
at the top right of the portal.

2. Create a new resource and search for Azure AD B2C.

3. In the Create new B2C Tenant or Link to existing Tenant blade, choose Link an existing Azure
AD B2C Tenant to my Azure Subscription.
4. On the Azure AD B2C Resource blade, select the Azure AD B2C tenant and the subscription you want
to link to, fill in the resource group, and then create the link.

After linking an Azure AD B2C tenant to an active subscription, the tenant should become active as well.
Azure AD B2C has many capabilities, out of which this module covers the following:

• Registering Azure AD B2C applications

• Configuring built-in identity providers

• Managing users

• Configuring identity policies

AAD B2C as an identity provider


Just like Azure AD, Azure AD B2C is an identity
provider. Even though both products are identity
providers, they serve different purposes. While
Azure AD is an organizational identity provider,
Azure AD B2C is a user management solution.

Because of the user orientation of Azure AD B2C, it has some features that Azure AD either does not have
or does not expose in the portal.

Identity Providers
While Azure AD B2C is an identity provider, it is not the only provider. In the modern world of application
development, it is extremely common to visit a web application that allows you to sign in using your
Google, Facebook, or Microsoft account. The reason this is so appealing is that it doesn’t require the user
to remember yet another password; all it takes is one click, and the user is logged on to the system.
For the application developer, setting up an identity provider is a relatively short process. It starts with
setting up an application on one of the identity providers and then connecting those providers to Azure
AD B2C by adding them on the identity providers blade.
The identity providers blade lets you add different providers such as Google, Facebook, or Microsoft.

For the sake of simplicity, this module will only use Azure AD B2C as an identity provider, but you can
read about social providers on the pages listed below.

Configuring Social Identity providers


https://aka.ms/moc-20487c-m11-pg1
https://aka.ms/moc-20487c-m11-pg2
https://aka.ms/moc-20487c-m11-pg3

User Attributes
Azure AD B2C ships with a ready-made set of attributes that can be used as part of the user’s profile.
These attributes are available to applications as claims, and the user can be asked to fill them in during
sign-up.

Examples of such attributes include, but are not limited to:

• Given name
• City

• Country/Region

• Display name
• User is new

Note: As of April 2018, Azure AD B2C ships with 13 built-in user attributes.

Azure AD B2C User Attributes blade

In addition to the built-in claims, it is possible to define additional attributes. On the Azure AD B2C
blade, go to the User Attributes blade. You should see a list of existing attributes.
To add a new attribute:

1. Click Add.

2. Enter a name for the new attribute. The name can include only alphanumeric characters and
underscore. The name cannot start with a number.
3. Choose a data type and enter a description.

4. Click Create.

Define a new user attribute in this blade.



Note: As of April 2018, Azure AD B2C supports three data types of custom attributes:

• String

• Boolean
• Int

Authentication and authorization with Azure AD B2C and ASP.NET Core Web API


After configuring Azure AD B2C as the identity provider, perform the following steps to use it for
authentication in an ASP.NET Core application.

Add the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package.

Then configure authentication to use the JwtBearer scheme, with Azure AD B2C as the authority, by using
the following code.

ASP.NET Core configuration for Azure AD B2C authentication


public void ConfigureServices(IServiceCollection services)
{
services.AddAuthentication(options =>
{
options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(jwtOptions =>
{
jwtOptions.Authority =
    $"https://{Configuration["AzureAdB2C:TenantName"]}.b2clogin.com/{Configuration["AzureAdB2C:TenantName"]}.onmicrosoft.com/{Configuration["AzureAdB2C:Policy"]}/v2.0/";
jwtOptions.Audience = Configuration["AzureAdB2C:ClientId"];
jwtOptions.Events = new JwtBearerEvents
{
OnAuthenticationFailed = AuthenticationFailed
};
});

// Add framework services.


services.AddMvc();
}

private Task AuthenticationFailed(AuthenticationFailedContext arg)


{
// For debugging purposes only!
var s = $"AuthenticationFailed: {arg.Exception.Message}";
arg.Response.ContentLength = s.Length;
arg.Response.Body.Write(Encoding.UTF8.GetBytes(s), 0, s.Length);
return Task.FromResult(0);
}

To the application settings, add the tenant name and the policy name that you created in Azure.
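For example, the settings read by the code above might look similar to the following; the tenant name,
policy name, and client ID are placeholders for the values you created in Azure.

Sample application settings for Azure AD B2C

"AzureAdB2C": {
  "TenantName": "example",
  "Policy": "B2C_1_SignUpSignIn",
  "ClientId": "22222222-2222-2222-2222-222222222222"
}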

In the Configure method, use the following code to add the authentication middleware to the pipeline.

Using authentication middleware


app.UseAuthentication();

Place this call before app.UseMvc() so that authentication runs before MVC handles each request.

For more information about using Azure AD B2C in the ASP.NET Core application, go to the
following URL:

https://aka.ms/moc-20487D-m9-pg7

Demonstration: Using AAD B2C with ASP.NET Core


In this demonstration, you will see how to authenticate to an ASP.NET Core service with Azure Active
Directory B2C using social accounts.

Demonstration Steps
You will find the steps in the “Demonstration: Using AAD B2C with ASP.NET Core“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD09_DEMO.md.

Lab B: Using Azure Active Directory with ASP.NET Core


Scenario
In this lab, you will secure an ASP.NET Core application using Azure Active Directory and AAD B2C.

Objectives
After you complete this lab, you will be able to:

• Use the ASP.NET Core OpenID Connect middleware.

• Authorize users for a specific service by using Azure Active Directory.

• Authenticate users using Azure Active Directory B2C.

Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD09_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD09_LAK.md.

Exercise 1: Authenticate a Client Application by using AAD B2C and


MSAL.js
Scenario
Implement authentication in an ASP.NET Core application using Azure Active Directory B2C.

Module Review and Takeaways


In this module, you learned about claims-based identity and federation scenarios. You can now use claims
provided by an identity provider in ASP.NET applications.

You learned about the latest recommended industry grade authorization and authentication protocols—
OAuth 2.0 and OpenID Connect. You know your way around Azure AD and can integrate organizational
ASP.NET applications with Azure AD.

Finally, you learned about Azure AD B2C and how to provide a good authentication experience for users.
You can integrate UWP applications with both Azure AD B2C and a secured ASP.NET Web API application.

Best Practices
• Use OpenID Connect to secure your applications, both client and server.
• Use well-known identity providers such as Azure AD, Azure AD B2C, Google, and Amazon.

• Use the OAuth 2.0 authorization code grant only for trusted applications and use the implicit flow for
any untrusted applications such as user-facing applications.

Review Question
Question: What are the advantages of using claims-based identity?

Tools
• Microsoft Visual Studio 2017
• Microsoft Azure portal

Module 10
Scaling Services
Contents:
Module Overview 10-1

Lesson 1: Introduction to Scalability 10-2

Lab A: Load Balancing Azure Web Apps 10-8


Lesson 2: Automatic Scaling 10-9

Lesson 3: Application Gateway and Traffic Manager 10-15


Lab B: Load Balancing with Azure Traffic Manager 10-25
Module Review and Takeaways 10-26

Module Overview
Services that are successful in providing business value are likely to experience growth in the number of
users and the amount of data that they need to handle. Developers should know how to make sure that
their services can handle the increasing workload while still maintaining a high level of performance and
good user experience. You will learn about the need for scalable services and how to handle increasing
workloads by using load balancing and distributed caching.
You will learn about scaling services in cloud deployments, along with the challenges that such services
face while they are growing.

Note: The Microsoft Azure portal user interface (UI) and Azure dialog boxes in Microsoft
Visual Studio 2017 are updated frequently when new Azure components and SDKs for Microsoft
.NET are released. Therefore, it is possible that some differences will exist between screenshots
and steps shown in this module, and the actual UI you encounter in the Azure portal and Visual
Studio.

Objectives
After completing this module, you will be able to:

• Explain the need for scalability.


• Describe how to use load balancing for scaling services.

• Explain Azure Load Balancer, Azure Application Gateway, and Azure Traffic Manager.

Lesson 1
Introduction to Scalability
Scalability is a critical aspect of any service-oriented software. It has a direct impact on how users view the
reliability and trustworthiness of a service and therefore has a bearing on the business.

Load balancing is a technique that enables applications to scale and be more resilient to failure. For large-
scale, distributed applications, this is an extremely important issue.

In this lesson, you will be introduced to the two approaches for scaling large applications and the
components they require. You will also learn about the different ways in which you can perform load
balancing and how to load-balance your Azure application.

Lesson Objectives
After completing this lesson, you will be able to:
• Describe the reasons that make scalability important.

• Explain the two approaches for scaling applications.

• Describe the components of a scaled-out architecture.


• Define the architectural challenges of scaling applications and services.

• Describe the tools and infrastructure required for load-balancing.

• Describe how to perform load-balancing for Azure applications.


• Scale out a web application in Azure.

The Reasons for Scaling


Scalability is a system’s ability to respond to
growing business needs in an optimal and
effective manner. It is also the ability to take
advantage of increased available resources. This is
often required for several reasons:
• A major surge in demand for a service over a
limited period of time (minutes or hours).

• A gradual rise in demand for a service over a


long period of time (weeks or months).

• An increase in the amount of data that must


be processed by the service.

A scalable system can handle such peaks and spikes in demand without any degradation in the service
quality experienced by customers. This is very important from a business perspective because it has a
direct impact on how customers perceive the reliability and trustworthiness of the service.

Scaling Approaches
There are two different approaches for scaling
services:

• Scaling out (also known as scaling


horizontally)
• Scaling up (also known as scaling vertically)

Both approaches can be used separately or


together.

Scaling Out
To scale out, you add additional nodes to an
existing system. With the increased computing
power and decreased cost of ‘commodity’ hardware, which is hardware that is easily available to
consumers, adding more processing and storage capacity to a distributed application is a very simple
undertaking. Modern distributed applications often run on large clusters of low-cost computers that are
interconnected into a single cluster. Such applications need to be aware of the fact that they run in a
clustered environment.

Scaling Up
To scale up, you add additional resources (processing or storage) to a single node of the system. This is
often the easiest option to apply but has inherent limitations such as the maximal memory capacity or the
number of network cards that can be installed on a single computer. At this point, there is no choice but
to replace the node with a better and more capable node. Scaling up might also require the application to
scale up along with the hardware—for instance, the application must be able to take advantage of
multiple cores in a single CPU.

The Components of a Scaled-Out Architecture


A scaled-out, distributed application is not a
panacea. It requires careful planning and
integration of multiple components. Some of
these required elements are:

• Load Balancer. The load balancer has the responsibility of routing requests to individual nodes of the
application. This routing is performed without informing or involving the clients. You can set up the
load balancer based on the number of requests that are currently being processed by each node, the
node’s geographical proximity to the client, and many other factors.
• Distributed cache. You can use a distributed cache to maintain an in-memory, low-latency,
read-write store that can be accessed by each node. By virtue of its distributed nature, this cache is
shared among all the nodes in the system; that is, changes made by one node are immediately
visible to all other nodes. The distributed cache is a good choice for enabling fast access to
commonly accessed information (such as user sessions) without incurring the high I/O latency of
accessing a relational database.

• Shared configuration. You can use a shared configuration to store and administer configuration
settings in a single location. This location can then be used to automatically configure server software
on multiple nodes, such as Microsoft Internet Information Services (IIS).

• Centralized SSL Certificate Support. Centralized SSL Certificate Support, a new feature in IIS 8, allows
you to store SSL certificates in a single location. Multiple IIS-hosting nodes can then use this location
for gaining access to the certificates. This helps the administration secure distributed applications
much more easily than was previously possible.

Scale Out Architectural Challenges


Scaling out a distributed application presents
many issues that must be dealt with. Some
examples of such issues are:
• Understanding which parts of the system can
be run in parallel. One of the most important
aspects of a distributed architecture is the
ability to recognize which parts of the system
can be run in parallel and which are limited to
sequential execution. Scaling out will bring
the most benefit when it is applied to the
former rather than the latter.

• Identifying the need to scale out. A


distributed system needs to recognize when it is experiencing high load. It must then decide whether
it needs to provision additional servers or handle the increasing load in a different manner. Similarly,
a distributed system must determine when the user load is light and resources can be given up.
Finally, the system needs to alert administrators when an abnormal load is detected so
that they can take the appropriate business steps.
• Performing an automated scale out. Once a system has identified the need to scale out, it must then
be able to provision additional machines and nodes in a fully automatic manner and add them to the
pool of resources available to the application. Failure to do so may result in being unable to meet
increasing load demands, which can have adverse effects on the business. This problem is
considerably simplified when running on a cloud platform, as the platform will usually contain APIs
and services that are specifically designed for such purposes.
• Dealing with failure. A distributed system, by its very nature, runs on multiple hardware elements. This means that the probability of a hardware failure increases proportionally with the number of physical machines that are used. Hence, a hardware failure and its effect on the application become a very real possibility that you must expect and plan for. In such cases, the system needs to take steps to isolate the problem and restore full service as soon as possible. Here, too, cloud platforms offer an advantage, because they detect hardware failures and automatically provision new virtual machines.

Transport vs. Application-Level Load Balancing


You can implement load balancing in either
software or hardware. Hardware load balancers
are dedicated appliances similar to routers and are
part of a data center’s core infrastructure.
Hardware load balancers are often expensive.
Software load balancing, on the other hand, can
be done by the operating system itself or by
specific applications. Often, such tools can also
serve as caches (similar to static HTML content
caches), so that frequently-accessed content can
be served directly to clients by the load balancer
rather than the back-end server.

The Windows Server family of operating systems supports load balancing by using Network Load Balancing (NLB). This is done by combining two or more computers running the same server software, such as IIS, into a single cluster, which can then be accessed by using its own IP address while still maintaining the IP addresses of the individual machines. The amount of traffic that each individual computer can handle is known as its load weight, and you can configure the load weight for each computer. You can dynamically add or remove machines to and from the cluster.

DNS Round Robin is an additional method of load balancing that does not require dedicated software or
hardware. When clients make calls to some domains such as www.blueyonder.com, the domain name is
resolved into a numerical IP address by using a Domain Name System (DNS) server. When using DNS
Round Robin, the DNS server resolves the domain name into a different IP address for each individual
request. The major disadvantage of this technique is that it makes the clients aware of the existence of
multiple machines.

You can also implement load balancing by using Microsoft Web Farm Framework for IIS. The Microsoft
Web Farm Framework provides load balancing, scaling, management, and provisioning solutions for IIS-
based web farms. The Microsoft Web Farm Framework also supports application-related solutions, such as
connection stickiness, and central output caching.
For additional information about the Microsoft Web Farm Framework, refer to the IIS documentation.

Web Farm Framework


http://go.microsoft.com/fwlink/?LinkID=314117

Scalability Services in Azure


Azure offers multiple load balancing solutions,
including Azure Load Balancer, Azure Application
Gateway, and Traffic Manager. Traffic Manager is
discussed in Lesson 3, “Azure Application Gateway
and Traffic Manager.”

Load Balancer is a Layer 4 Transmission Control Protocol/User Datagram Protocol (TCP/UDP) load balancer, which distributes traffic to a set of endpoints. Load Balancer supports two traffic distribution modes, which control the choice of the target endpoint that will handle incoming traffic:

• Hash-based distribution mode. In this mode, Load Balancer computes a hash based on the 5-tuple
that includes the source IP, source port, destination IP, destination port, and protocol type (TCP or
UDP). Packets that have the same 5-tuple are routed to the same endpoint. This guarantees that
packets belonging to the same TCP session will be handled by the same endpoint. However, if a client
creates multiple TCP sessions, the source port may change and Load Balancer may direct the traffic to
a different endpoint. This can happen when the client issues multiple HTTP requests to the same
service.

• Source IP affinity distribution mode. In this mode, a 2-tuple that includes the source IP and destination IP, or a 3-tuple that includes the source IP, destination IP, and protocol type, is used to map traffic to the available endpoints. When using source IP affinity, packets from the same client IP will always go to the same endpoint if they are directed to the same destination IP.
Load Balancer has additional useful features for traffic management. It can monitor the health of your
services by probing their endpoints with HTTP or TCP requests, can forward or block specific ports or
remap ports exposed externally to different ports, and more.
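
As an illustration, the following Azure CLI sketch creates a load-balancing rule that uses the source IP affinity mode instead of the default hash-based distribution. The load balancer, frontend, and pool names are hypothetical and assume those resources already exist.

Create a load-balancing rule with source IP affinity

# Route traffic on port 80 and keep each client IP on the same backend endpoint
az network lb rule create \
    --resource-group blueyonder-rg \
    --lb-name blueyonder-lb \
    --name http-rule \
    --protocol Tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name LoadBalancerFrontEnd \
    --backend-pool-name blueyonder-pool \
    --load-distribution SourceIP

Setting --load-distribution to Default restores the 5-tuple hash, and SourceIPProtocol selects the 3-tuple mode.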

For more information about Load Balancer, refer to:

Azure Load Balancer overview


https://aka.ms/moc-20487D-m10-pg1

Application Gateway is a Layer 7 (application) load balancer. In addition to pure HTTP load balancing,
Application Gateway supports SSL termination, URL-based routing, web application firewall, and more.
Application Gateway does not support arbitrary protocols. It works with HTTP, HTTPS, and WebSockets
traffic only.
With Application Gateway, you can route traffic to endpoints by using the following strategies:

• Round-robin routing. In this mode, each request will be routed to another instance of your service,
chosen in a round-robin fashion (for example, with three instances: the first request to instance 1, the
second request to instance 2, the third request to instance 3, the fourth request back to instance 1,
and so on).

• URL-based routing. In this mode, you can inspect the URL path components to determine which
endpoint will receive the traffic.

You can control the load balancer’s affinity (stickiness) by using an HTTP cookie. The first response sent by
Application Gateway to a specific client can contain an HTTP cookie, which will be sent on subsequent
requests from the same client session. Application Gateway can then use this cookie to route subsequent
requests from the same session to the same service endpoint.
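
Both behaviors can also be configured from the command line. The following Azure CLI sketch assumes an existing gateway and backend pool; all resource names are hypothetical, and the exact parameters may vary between CLI versions.

Enable cookie-based affinity and URL-based routing

# Enable cookie-based session affinity on the gateway's HTTP settings
az network application-gateway http-settings update \
    --resource-group blueyonder-rg \
    --gateway-name blueyonder-gw \
    --name appGatewayBackendHttpSettings \
    --cookie-based-affinity Enabled

# Route requests whose path starts with /videos to a dedicated backend pool
az network application-gateway url-path-map create \
    --resource-group blueyonder-rg \
    --gateway-name blueyonder-gw \
    --name videos-path-map \
    --rule-name videos-rule \
    --paths "/videos/*" \
    --address-pool videos-pool \
    --http-settings appGatewayBackendHttpSettings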

Application Gateway offers numerous additional features, including web application firewall (protection
against common attacks like cross-site scripting and SQL injection), SSL offloading, automatic health
monitoring, HTTP to HTTPS redirection, and more. Furthermore, you can use Application Gateway to
route traffic to non-Azure services. Any public Internet IP address can be an endpoint serviced by an
Application Gateway load balancer. As a result, you can mix on-premises service instances and Azure-
hosted service instances behind the same load balancer, achieving additional flexibility and robustness to
failure.

For more information about Application Gateway, refer to:


Application Gateway Introduction
https://aka.ms/moc-20487D-m10-pg2

Note that in a load-balanced scenario, each request may arrive at a different instance. This means that any
common data, such as session information in an ASP.NET application, needs to be accessible to all
instances. You can use a database or a distributed cache for this purpose.

Another approach for load balancing is through the use of message queues: either Azure Service Bus
queues or Azure Queue storage. In this scenario, you bring up multiple worker roles that read from a
single queue. Because each instance reads a single message, the processing load is distributed across
those workers. For more details on queues, refer to module 7, "Microsoft Azure Service Bus" and module
9, "Microsoft Azure Storage."

Demonstration: Scaling Out with Microsoft Azure Web Apps


In this demonstration, you will scale a web application to multiple instances in Azure.

Demonstration Steps
You will find the steps in the “Demonstration: Scaling Out with Microsoft Azure Web Apps“ section on the
following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD10_DEMO.md.

Lab A: Load Balancing Azure Web Apps


Scenario
In this lab, you will create an Azure Web App with multiple instances and test its load-balancer with and
without instance affinity.

Objectives
After you complete this lab, you will be able to:

• Scale your services to more than one instance


• Test that the service is scaled

Lab Setup
Estimated Time: 20 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD10_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_LAK.md.

Exercise 1: Prepare the Application for Load Balancing


Scenario
Configure the web app for multiple instances

Exercise 2: Test the Load Balancing with Instance Affinity


Scenario
Test the scaled service with instance affinity

Exercise 3: Test the Load Balancing Without Affinity


Scenario
Test the scaled service without affinity

Lesson 2
Automatic Scaling
Automatic scaling is a critical aspect of any service-oriented system. It has a direct impact on both developers and the user experience.

In this lesson, you will learn how to configure auto-scaling.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe autoscaling rules.


• Explain how to design applications for autoscaling.

• Configure autoscaling in Azure Web Apps.

Understanding Automatic Scaling Rules, Scaling Up and Down


Automatic scaling rules are built on criteria and actions. The criteria define when to apply the rule, and the actions define what happens when a criterion is met.
For example, a criterion can be that the average CPU percentage is greater than 70 percent over the last 10 minutes. If this criterion is met, the action can be to add another instance.
Criteria are built from the following six properties:

• Time aggregation. A function to calculate the aggregated value for the comparison. Available options are average, minimum, maximum, total, last, and count.

• Metric name. The name of the metric that will be monitored and used for the criteria. Use metrics according to your application’s needs. Available options are CPU percentage, memory percentage, disk queue length, HTTP queue length, data in, and data out.

• Time grain statistics. These are used to reduce the noise in the metric by aggregating it at one-minute intervals. Available options are average, minimum, maximum, and sum.

• Operator. Used to compare the metrics. Available options are: greater than, greater than or equal to,
less than, less than or equal to, equal to, not equal to.

• Threshold. This is the numeric value to compare against. In the case of a percentage metric, the threshold is a value between 1 and 100.

• Duration. The amount of time in minutes to calculate the metric value by using the time aggregation
function.

An action is built from the following three properties:

• Operation. This defines how to increase or decrease the instance count. Available options are:
increase count by, increase percent by, increase count to, decrease count by, decrease percent by,
decrease count to.

• Instance count. This is the number of instances to increase or decrease.

• Cooldown. The amount of time to wait after a scale operation, before scaling again. For example, if
cooldown is 10 minutes and a scale operation just occurred, auto scale will not attempt to scale again
for 10 minutes. This is to allow the metrics to stabilize before scaling again.
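
To make the mapping between criteria and actions concrete, the following Azure CLI sketch adds a scale-out rule and a matching scale-in rule to an existing autoscale setting. The resource group, setting name, and metric name are assumptions; metric names differ between resource types (for example, App Service plans expose CpuPercentage).

Add scale-out and scale-in rules to an autoscale setting

# Scale out by one instance when average CPU exceeds 70 percent over the last 10 minutes
az monitor autoscale rule create \
    --resource-group blueyonder-rg \
    --autoscale-name blueyonder-autoscale \
    --condition "CpuPercentage > 70 avg 10m" \
    --scale out 1 \
    --cooldown 10

# Scale in by one instance when average CPU drops below 30 percent over the last 10 minutes
az monitor autoscale rule create \
    --resource-group blueyonder-rg \
    --autoscale-name blueyonder-autoscale \
    --condition "CpuPercentage < 30 avg 10m" \
    --scale in 1 \
    --cooldown 10

The condition string encodes the metric name, operator, threshold, time aggregation, and duration, while the --scale and --cooldown parameters encode the action.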

Designing Applications to Support Automatic Scaling


To be able to auto-scale an application, some design considerations need to be handled carefully:
• Avoid instance stickiness. Stickiness is when
requests from the same client are always
routed to the same server as a result of
storing session state in memory, and using
machine-specific keys for encryption.
Stickiness limits the application's ability to
scale out. Make sure that any instance can
handle any request.

• Identify bottlenecks. Scaling out isn't a magic fix for every performance issue. Identify and resolve the bottlenecks in the system before throwing more instances at the problem. The database is one of the common bottlenecks in applications.
• Decompose workloads by scalability requirements. In a microservices architecture, each service has its own scaling requirements. Similarly, the main application and the administrative application (back office) often have different scaling requirements.

• Offload resource-intensive tasks. To minimize the load on the servers that handle user requests, tasks that consume a lot of CPU or I/O resources should be moved to background jobs when possible.

• Design for scale in. When instances get removed, the application must terminate gracefully. Here are
some things that need to be handled carefully:

o Listen for shutdown events and cleanly shut down.

o Consumers of a service should handle errors and use a retry policy when errors occur.

o For long-running tasks, consider breaking up the work and using checkpoints.

o If an instance is removed in the middle of processing, use queues so that the work can be rerun
on another instance.

Automatic Scaling for Azure Web Apps


To automatically scale an Azure web app based
on the rules described in previous topics, perform
the following steps.

In Scale out settings, enable autoscale.

FIGURE 10.1: SCALE OUT SETTINGS


Add a new rule for scale out.

FIGURE 10.2: SCALE OUT RULE


Configure the scale out rule to scale when CPU Percentage is greater than 70 percent for 10 minutes.

FIGURE 10.3: SCALE RULE DIALOG


Add another rule and configure it to scale in when CPU Percentage is less than 30 percent for 10 minutes.

Fill in the name of this setting and configure the instance limits to a minimum of 1 instance and a
maximum of 5 instances, and then save the settings.
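
The same autoscale setting can also be created with the Azure CLI instead of the portal. The following sketch assumes a hypothetical App Service plan named blueyonder-plan; the scale-out and scale-in rules are then added as shown in the previous topic.

Create an autoscale setting with instance limits

# Create an autoscale setting for the App Service plan with 1 to 5 instances (1 by default)
az monitor autoscale create \
    --resource-group blueyonder-rg \
    --name blueyonder-autoscale \
    --resource blueyonder-plan \
    --resource-type Microsoft.Web/serverfarms \
    --min-count 1 \
    --max-count 5 \
    --count 1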

Demonstration: Configuring Automatic Scaling for Azure Web Apps


In this demonstration, you will learn how to set up a scaling rule for the same app so that if CPU usage is over 80% for more than 60 seconds, the app automatically scales to two instances. You will then generate load by using the client script and verify that the auto-scale rule kicks in.

Demonstration Steps
You will find the steps in the “Demonstration: Configuring Automatic Scaling for Azure Web Apps“ section
on the following page: https://github.com/MicrosoftLearning/20487D-Developing-Microsoft-Azure-and-
Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.

Lesson 3
Application Gateway and Traffic Manager
Scalable applications consist of multiple compute nodes, which are often distributed in multiple cloud
regions or a hybrid environment combined with on-premises nodes. Balancing traffic to these compute
nodes requires performing a set of repetitive tasks, such as determining node health and removing unhealthy nodes from the pool, routing traffic based on the client’s location, and protecting back-end nodes
from various security attacks. Earlier in this course, we discussed some of the benefits of having an API
management layer (such as Azure API Management) in front of your web service, or a general reverse
proxy (such as IIS or NGINX). In this lesson, we will discuss additional Azure services for performing global
and local load balancing of your compute nodes.

In this lesson, we will discuss Application Gateway and Traffic Manager, the two Azure services for load
balancing scalable services. By using Traffic Manager, you can distribute your service across multiple
geographic regions and route traffic according to the user’s location. By using Application Gateway, you
can perform sophisticated load balancing of HTTP, WebSockets, and HTTP/2 traffic.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain the capabilities of Application Gateway and the benefits of using it.

• Configure Application Gateway backend services, including Azure App Service.

• Balance traffic with Traffic Manager DNS profiles.


• Explain how to use Traffic Manager with Application Gateway.

Application Gateway Capabilities


In distributed web services, a load balancer is a
system component that distributes traffic across
multiple compute nodes. Many load balancers
operate at the transport layer (layer 4 of the Open Systems Interconnection (OSI) networking model), which means they do not understand
HTTP traffic and can make routing decisions
based only on the low-level TCP/IP protocol. Load
Balancer, which is outside the scope of this
module, is a layer 4 load balancer. It can be used,
for example, to distribute load across a set of
virtual machines running a database product, such
as Microsoft SQL Server. Although layer 4 load balancers are very flexible and can support a variety of
back-end services and protocols, more advanced scaling and traffic distribution can be achieved by
understanding the HTTP requests.

For more information about Load Balancer, go to:


https://aka.ms/moc-20487D-m10-pg4

Application Gateway is a load balancer for web services and applications that operate on layer 7 of the
OSI networking model. Rather than looking at HTTP traffic on the transport layer as plain TCP packets,
Application Gateway inherently understands HTTP requests and can route traffic based on the request
URL. For example, requests to /videos will be serviced by a pool of machines different from the rest of
your application’s traffic. Application Gateway also supports the WebSockets and HTTP/2 protocols, in
addition to HTTP (over plaintext or TLS).
In addition to providing load balancing services, Application Gateway offers some additional features:

• SSL termination. You can offload the costly decryption and encryption work from your backend web
servers and perform SSL (TLS) processing on the gateway.
• Request redirection. You can redirect specific requests to other hosts, or, very commonly, redirect all
HTTP (insecure) traffic to HTTPS.

• Web application firewall (WAF). You can protect your backend web servers from common web
application attacks by having the gateway block them.

• Reliability. You can protect your service from failures and downtime by having the gateway
automatically detect unhealthy nodes and move traffic to the healthy ones.
For more information about the types of attacks detected and mitigated by Application
Gateway’s optional Web Application Firewall, go to:
https://aka.ms/moc-20487D-m10-pg5

Application Gateway can route traffic to a number of nodes in a backend pool. Each gateway can handle
multiple pools and route traffic to nodes within the pools based on criteria that you specify. You can mix
and match numerous types of nodes, including:

• Azure Virtual Machines

• Web applications and services running on Azure App Service


• Private IP addresses in a virtual network

• Public IP addresses hosted in Azure or outside of Azure


• Fully-qualified domain names (FQDNs) hosted in Azure or outside of Azure

This ability to mix and match nodes that are hosted in different environments helps you implement
various hybrid solutions with very high degrees of reliability. For example, you can have a backup node
running in a different region, or even a different cloud provider, which will be used in case of a disaster
that affects your primary nodes. Or, you could route most of your traffic to a simple service hosted in
Azure App Service, but route more computationally-expensive requests (such as video or image encoding)
to a dedicated pool of powerful virtual machines.
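
For instance, a backend pool that mixes an App Service FQDN with an external IP address could be created with a command along the following lines. The gateway name, pool name, and addresses are hypothetical values used for illustration.

Create a backend pool with mixed node types

# Add a backend pool containing an App Service FQDN and an external public IP address
az network application-gateway address-pool create \
    --resource-group blueyonder-rg \
    --gateway-name blueyonder-gw \
    --name mixed-pool \
    --servers blueyonderhotels.azurewebsites.net 203.0.113.10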

Configuring Application Gateway Backends


To configure a new Application Gateway, there
are multiple moving parts that you need to create
and attach together. Because Application Gateway
is quite flexible, there are many possible
configuration options and architectures. In most
configurations, you will need the following
components:
• Address pool (backend pool). A set of nodes
to which the gateway routes traffic.

• Front-end IP address. The IP address clients use to send HTTP traffic to the gateway.
• Front-end port. The TCP port that clients use to send HTTP traffic to the gateway.

• Routing rules and URL path maps. Rules that specify how to route traffic to nodes.
• Probes. Health checks that automatically remove unhealthy nodes from rotation.
• HTTP listener. The actual component that routes traffic to an address pool.

• Virtual network and subnet. A set of IP addresses to host the gateway instances.
In a nutshell, when an HTTP request arrives at the gateway’s front-end IP address and port (such as
52.178.32.29:8080), a listener picks up the request and evaluates routing rules and path maps to
determine which address pool should process the request. For example, there can be a rule specifying that
requests with URLs containing /videos should be routed to a special backend pool. Unhealthy nodes are
removed from the pool using probes, so the listener can send the request to one of the healthy nodes,
wait for a response, and then send the response back to the client.
You can use the Azure portal, Azure PowerShell, or the Azure Command-Line Interface (CLI) to create and
configure Application Gateway. However, keep in mind that the Azure portal provides support only for
the simpler scenarios. For example, to add a web service hosted in Azure App Service to an Application
Gateway backend pool, you will need to use Azure PowerShell or the Azure CLI.
For more information about Application Gateway, including reference documentation for the
Azure CLI and PowerShell, go to:
https://aka.ms/moc-20487D-m10-pg6

The following screenshot shows the configuration page for creating a new application gateway. Note that
creating a gateway with only one instance means you are not covered by the Azure Service Level
Agreement (SLA).

FIGURE 10.6: CREATE APPLICATION GATEWAY
The following screenshot shows the process of creating a new virtual network for the application gateway:

FIGURE 10.7: CREATE VIRTUAL NETWORK DIALOG IN AZURE PORTAL

The following screenshot shows the end of the application gateway configuration process after a subnet
has been configured and a public IP address selected:

FIGURE 10.8: SUBNET CONFIGURATION DIALOG IN AZURE PORTAL

Note: Creating an application gateway is a time-consuming operation because of the underlying resources that need to be provisioned. In one of our experiments, creating an application gateway in the West Europe region took 20 minutes.

The following screenshot illustrates how to configure the backend pool for your application gateway by
adding either Azure Virtual Machines or IP addresses to the pool:

FIGURE 10.9: APPLICATION GATEWAY BACKEND POOL CONFIGURATION DIALOG IN AZURE PORTAL

The following screenshot illustrates a fully-configured Application Gateway instance with a public IP
address, as shown on the Azure portal:

FIGURE 10.10: OVERVIEW DIALOG OF APPLICATION GATEWAY IN AZURE PORTAL
The classic configuration for Application Gateway involves a backend pool comprised of multiple compute
nodes serving your service traffic. For example, you might have two virtual machines running in Azure and
another backup machine running in an on-premises data center in a single backend pool. However, when
you host web services in Azure App Service, you do not access them by IP address, but rather by an FQDN
such as blueyonderhotels.azurewebsites.net. Although you can add this FQDN directly to the backend
pool, there is an additional configuration step required. When the gateway sends traffic to your web app
in Azure App Service, it doesn’t rewrite the host header by default. You need to configure the gateway to
retrieve the hostname from Azure App Service and use it when directing traffic to Azure App Service.
The following Azure CLI commands configure the gateway to retrieve the hostname from App Service:

Configure the gateway to retrieve the hostname from App Service


# Names of the existing resource group, gateway, HTTP settings, and probe
RESOURCEGROUP=blueyonderhotels
GATEWAY=blueyonderhotelsgw
HTTPSETTINGS=appGatewayBackendHttpSettings
PROBE=appGatewayDefaultProbe

# Update the HTTP settings so the gateway picks the host name from the backend pool
az network application-gateway http-settings update \
    -g $RESOURCEGROUP --gateway-name $GATEWAY -n $HTTPSETTINGS \
    --host-name-from-backend-pool true

# Create a health probe that uses the host name from the HTTP settings
az network application-gateway probe create \
    -g $RESOURCEGROUP --gateway-name $GATEWAY -n $PROBE \
    --protocol Http --path / --host-name-from-http-settings true

# Associate the new probe with the gateway's HTTP settings
az network application-gateway http-settings update \
    -g $RESOURCEGROUP --gateway-name $GATEWAY \
    -n $HTTPSETTINGS --probe $PROBE

There are three steps in the preceding listing. The first updates the HTTP settings for the gateway to
retrieve the hostname from the backend pool. The second creates a new health probe and configures it to
use the hostname settings. The third command associates the health probe with the gateway’s HTTP
settings.

For more information on using Application Gateway with web applications and services
hosted in Azure App Service, go to:
https://aka.ms/moc-20487D-m10-pg7

Demonstration: Using an Azure Web App Behind Azure Application


Gateway
In this demonstration, you will learn how to create an Application Gateway, point it to a deployed service, and then verify that the service can be accessed through the gateway.

Demonstration Steps
You will find the steps in the “Demonstration: Using an Azure Web App Behind Azure Application
Gateway“ section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-
Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.

Creating Traffic Manager DNS Profiles


Earlier in this lesson, we discussed load balancing
solutions that actively receive and route traffic to
a collection of backend nodes. Traffic Manager is
an alternative load balancing solution, which is
based on DNS. Instead of routing all the traffic
through the load balancer, which is how
Application Gateway and Load Balancer operate,
Traffic Manager points clients to specific
endpoints and doesn’t see or process the actual
traffic.
Traffic Manager supports multiple types of
endpoints, including Azure App Service, virtual
machines, and external endpoints, which can be hosted outside of Azure. You configure Traffic Manager
with multiple endpoints (such as blueyonder-eu.azurewebsites.net and blueyonder-
us.azurewebsites.net) and a routing policy. When a client connects to your service, it sends a DNS
resolution query to the endpoint (such as blueyonder.trafficmanager.net). Traffic Manager uses its
routing policy to resolve the query to one of the endpoints and returns the result to the client. Then, the
client connects directly to the endpoint without Traffic Manager’s involvement.
Traffic Manager supports multiple routing policies:
• Performance. Traffic Manager chooses the endpoint closest to the client’s location in terms of
network latency. This is a very useful policy, which can route European traffic to your European data
center, and US traffic to the US data center. The resulting improvements in request-response latency
can be significant. (Network round-trip time over 1,000 kilometers is approximately 15ms, while
round-trip time over 10,000 kilometers is over 100ms.)

• Priority. Traffic Manager uses the primary endpoint if it is available. If the primary endpoint fails, it will use
the secondary (backup) endpoints. You can configure the health checks that Traffic Manager will use
to determine if your service is healthy.

• Weighted. Traffic Manager will distribute traffic across all endpoints based on the weights you specify.
For example, you can route 20 percent of the traffic to one endpoint and 80 percent of the traffic to
another endpoint.

• Geographic. Traffic Manager can direct traffic to specific endpoints based on the client’s geographic
location. For example, users from the Russian Federation can be directed to on-premises servers in
the Russian Federation (to comply with local regulations), while other users from Europe and Asia will
be directed to Azure-hosted servers.

For more information on Traffic Manager routing methods, go to:


https://aka.ms/moc-20487D-m10-pg8
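
As a rough sketch, the following Azure CLI commands create a Performance-routed Traffic Manager profile and register two web apps hosted in different regions as Azure endpoints. The DNS name, resource group, and web app names are hypothetical.

Create a Traffic Manager profile with two Azure endpoints

# Create a profile that routes each client to the closest endpoint by network latency
az network traffic-manager profile create \
    --resource-group blueyonder-rg \
    --name blueyonder-tm \
    --routing-method Performance \
    --unique-dns-name blueyonderhotels

# Look up the resource IDs of the two regional web apps
EU_WEBAPP_ID=$(az webapp show --resource-group blueyonder-rg --name blueyonder-eu --query id --output tsv)
US_WEBAPP_ID=$(az webapp show --resource-group blueyonder-rg --name blueyonder-us --query id --output tsv)

# Register each web app as an Azure endpoint in the profile
az network traffic-manager endpoint create \
    --resource-group blueyonder-rg \
    --profile-name blueyonder-tm \
    --name europe-endpoint \
    --type azureEndpoints \
    --target-resource-id $EU_WEBAPP_ID

az network traffic-manager endpoint create \
    --resource-group blueyonder-rg \
    --profile-name blueyonder-tm \
    --name us-endpoint \
    --type azureEndpoints \
    --target-resource-id $US_WEBAPP_ID

Clients then resolve blueyonderhotels.trafficmanager.net and connect directly to whichever endpoint the routing method selects.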

The following screenshot illustrates the Azure portal dialog for creating a new Traffic Manager profile:

FIGURE 10.11: CREATE TRAFFIC MANAGER PROFILE DIALOG
The following screenshot illustrates the configuration pane for a Traffic Manager profile, including its
health checks and failover settings:

FIGURE 10.12: HEALTH CHECKS AND FAILOVER SETTINGS OF THE CONFIGURATION PANE FOR A TRAFFIC MANAGER PROFILE

The following screenshot illustrates the dialog for adding a new endpoint to a Traffic Manager profile:

FIGURE 10.13: ADD ENDPOINT DIALOG OF TRAFFIC MANAGER PROFILE CONFIGURATION

Note: To configure a web application or service running in Azure App Service to use Traffic
Manager, you need to use the Standard SKU. Otherwise, Traffic Manager will not route traffic to
your service.

Demonstration: Using Traffic Manager With an Azure Web App in Multiple


Regions
In this demonstration, you will learn how to create a Traffic Manager profile in the Azure portal, access the service, and see which endpoint served the request.

Demonstration Steps
You will find the steps in the “Demonstration: Using Traffic Manager With an Azure Web App in Multiple
Regions“ section on the following page: https://github.com/MicrosoftLearning/20487D-Developing-
Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_DEMO.md.

Using Traffic Manager with Application Gateway


After learning about Traffic Manager and
Application Gateway, and gaining cursory
familiarity with Load Balancer, you might be
wondering which solution would be the best
choice for a specific system architecture. In many
cases, you shouldn’t have to choose a single
solution. In fact, it might make sense to use more
than one load balancing solution for your
application’s needs. In a fairly common
configuration, you would deploy your globally-
available application in multiple Azure regions,
and use Traffic Manager as a DNS load balancer
to direct clients to endpoints in these regions. In each region, you would have multiple nodes serving
traffic to the same logical application. So, you’d use Application Gateway as a layer 7 load balancer. In this
configuration, you gain global reliability and provide good performance to your clients by using Traffic
Manager, and you get the convenience of balancing and routing your application traffic by using
Application Gateway.
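
As a minimal sketch of the combined setup, each regional Application Gateway can be registered with Traffic Manager as an external endpoint by its public DNS name. The profile, endpoint name, and DNS label below are hypothetical and assume the profile from the previous topic already exists.

Register a regional Application Gateway as a Traffic Manager endpoint

# Register the gateway's public DNS name as an external endpoint in the West Europe region
az network traffic-manager endpoint create \
    --resource-group blueyonder-rg \
    --profile-name blueyonder-tm \
    --name westeurope-gateway \
    --type externalEndpoints \
    --target blueyonder-gw-weu.westeurope.cloudapp.azure.com \
    --endpoint-location "West Europe"

Repeating this command for each region gives Traffic Manager a pool of regional gateways to choose from, while each gateway continues to balance traffic across the nodes in its own region.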

The following table summarizes the key differences between Load Balancer, Application Gateway, and
Traffic Manager. They support different protocols, operate at different layers, and allow different types of
endpoints in the backend pool.

                   Load Balancer       Application Gateway      Traffic Manager
Layer              Transport (4)       Application (7)          DNS
Endpoints          Azure VMs           Any                      Any
Protocols          Any (TCP, UDP)      HTTP, WebSockets         Any
Health checks      TCP/UDP             HTTP GET                 HTTP GET

For more information on the various Azure load balancing services (Traffic Manager,
Application Gateway, and Load Balancer), and a sample architecture case study that exhibits
use cases for all three of them, go to: https://aka.ms/moc-20487D-m10-pg9

Lab B: Load Balancing with Azure Traffic Manager


Scenario
In this lab, you will create an Azure Web App in multiple Azure regions and use Azure Traffic Manager to
select the appropriate endpoint based on where the user is located.

Objectives
After you complete this lab, you will be able to:

• Scale your services to more than one instance in a different region


• Test the scaled service

Lab Setup
Estimated Time: 15 minutes
You will find the high-level steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-
Services/blob/master/Instructions/20487D_MOD10_LAB_MANUAL.md.
You will find the detailed steps on the following page: https://github.com/MicrosoftLearning/20487D-
Developing-Microsoft-Azure-and-Web-Services/blob/master/Instructions/20487D_MOD10_LAK.md.

Exercise 1: Deploy an Azure Web App to Multiple Regions


Scenario
Deploy an ASP.NET Core service to two regions in Azure

Exercise 2: Create an Azure Traffic Manager Profile


Scenario
Create and configure an Azure Traffic Manager in Azure portal

Module Review and Takeaways


In this module, you learned about scaling services and applications. You learned about the benefits of
using load balancers in Azure environments, and you learned about automatic scaling. Finally, you
learned how to use Azure Application Gateway and Azure Traffic Manager.

Course Evaluation
Your evaluation of this course will help Microsoft
understand the quality of your learning
experience.

Please work with your training provider to access the course evaluation form.

Microsoft will keep your answers to this survey private and confidential and will use your responses to improve your future learning experience. Your open and honest feedback is valuable and appreciated.