Official Course AZ-400T05: Implementing Application Infrastructure

Module 3 Azure Compute services

Labs Overview
1. If you already have a Microsoft account that you have not already used to sign up for a free Azure trial subscription, you're ready to get started. If not, don't worry, you can create a new Microsoft account.
2. After you've created a Microsoft account, create your free Azure account. (If you're not already signed into your Microsoft account, you'll need to sign in now.)
●● Enter your cellphone number and have Microsoft send you a text message to verify your identity.
●● Enter the code you have been sent to verify your number.
●● Provide valid payment details. This is required for verification purposes only; your credit card won't be charged for any services you use during the trial period, and the account is automatically deactivated at the end of the trial unless you explicitly decide to keep it active. For more information, see the Azure free account FAQ.
Like many other cloud infrastructure platforms today, Azure is continuously developing updates to its services and components. If you've had your own subscriptions for any length of time, you're already aware that changes to services happen much more rapidly than with more traditional application deployment models.

Every effort will be made to update course content when there are major changes to product functionality. However, there will be occasions where course content does not exactly match the latest version of the product. In most cases, you should still be able to understand and complete the tasks. The general guidance from the Azure documentation teams is to check the documentation frequently to see what upcoming notifications have been posted or where documentation has been updated to reflect the latest changes.
We encourage you to consult the Azure updates page for the latest information. From there, you can research blogs and other provided resources to help you stay current in a cloud-enabled world.

https://aka.ms/edx-devops200.4x-msa
https://aka.ms/edx-devops200.4x-az2
https://azure.microsoft.com/en-us/free/free-account-faq/
https://azure.microsoft.com/en-us/updates/
Lab

AZ-400T05-M03-Lab Tasks

●● http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaCDeployApptoAppServices.html
●● http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaCDeployApptoAKS.html
Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of traffic?
Azure Service Fabric
Virtual Machine Scale Sets
Azure Kubernetes Service
Virtual Network
Update Domains
Azure AD Domain Services
Fault Domains
Event Domains
Complete the following sentence: Azure App Service is an Azure Platform-as-a-Service offering that is used for... ?
processing events with serverless code.
detecting, triaging, and diagnosing issues in your web apps and services.
building, testing, releasing, and monitoring your apps from within a single software application.
hosting web applications, REST APIs, and mobile back ends.
Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
Incrementally deploys apps into production with deployment slots and slot swaps.
Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Module Review Questions
Which of the following are network models for deploying clusters in Azure Kubernetes Service (AKS)? (choose two)
Basic Networking
Native Model
Advanced Networking
Resource Model
Which of the following cloud service models provides the most control, flexibility, and portability?
Infrastructure-as-a-Service (IaaS)
Functions-as-a-Service (FaaS)
Platform-as-a-Service (PaaS)
Module Review Questions
Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of
traffic?
Azure Service Fabric
■■ Virtual Machine Scale Sets
Azure Kubernetes Service
Virtual Network
Explanation
Virtual Machine Scale Sets is the correct answer.
All other answers are incorrect.
Azure Service Fabric is for developing microservices and orchestrating containers on Windows or Linux.
Azure Kubernetes Service (AKS) simplifies the deployment, management, and operations of Kubernetes.
Virtual Network is for provisioning and connecting virtual networks in Azure.
With Azure VMs, scale is provided for by Virtual Machine Scale Sets (VMSS). Azure VMSS let you create and
manage groups of identical, load balanced VMs. The number of VM instances can increase or decrease
automatically, in response to demand or a defined schedule. Azure VMSS provide high availability to your
applications, and allow you to centrally manage, configure, and update large numbers of VMs. With Azure
VMSS, you can build large-scale services for areas such as compute, big data, and container workloads.
Complete the following sentence: Azure App Service is an Azure Platform-as-a-Service offering that is used for... ?
processing events with serverless code.
detecting, triaging, and diagnosing issues in your web apps and services.
building, testing, releasing, and monitoring your apps from within a single software application.
■■ hosting web applications, REST APIs, and mobile back ends.
Explanation
Hosting web applications, REST APIs, and mobile back ends is the correct answer.
Processing events with serverless code is performed by Azure Functions. Detecting, triaging, and diagnosing issues in your web apps and services is performed by Application Insights.
Building, testing, releasing, and monitoring your apps from within a single software application is performed by Visual Studio App Center.
Azure App Service is a Platform-as-a-Service offering on Azure, for hosting web applications, REST APIs, and mobile back ends. With Azure App Service you can create powerful cloud apps quickly within a fully managed platform. You can use Azure App Service to build, deploy, and scale enterprise-grade web, mobile, and API apps to run on any platform. Azure App Service ensures that your applications meet rigorous performance, scalability, security, and compliance requirements, and benefit from using a fully managed platform for performing infrastructure maintenance.
■■ Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
■■ Incrementally deploys apps into production with deployment slots and slot swaps.
■■ Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Explanation
All of the answers are correct.
Web App for Containers from the Azure App Service allows customers to use their own containers, and deploy them to Azure App Service as a web app. Similar to the Azure Web App solution, Web App for Containers provides capabilities that are crucial to web app development and management. Web App for Containers provides an ideal environment to run web apps that do not require extensive infrastructure control.
Which of the following are network models for deploying clusters in Azure Kubernetes Service (AKS)? (choose two)
■■ Basic Networking
Native Model
■■ Advanced Networking
Resource Model
Explanation
Basic Networking and Advanced Networking are correct answers.
Native Model and Resource Model are incorrect answers because these are two deployment models
supported by Azure Service Fabric.
In AKS, you can deploy a cluster to use either Basic Networking or Advanced Networking. With Basic Networking, the network resources are created and configured as the AKS cluster is deployed. Basic Networking is suitable for small development or test workloads, as you don't have to create the virtual network and subnets separately from the AKS cluster. Simple websites with low traffic, or lift and shift workloads into containers, can also benefit from the simplicity of AKS clusters deployed with Basic Networking.

With Advanced Networking, the AKS cluster is connected to existing virtual network resources and configurations. Advanced Networking allows for the separation of control and management of resources. When you use Advanced Networking, the virtual network resource is in a separate resource group to the AKS cluster. For most production deployments, you should plan for and use Advanced Networking.
Which of the following cloud service models provides the most control, flexibility, and portability?
■■ Infrastructure-as-a-Service (IaaS)
Functions-as-a-Service (FaaS)
Platform-as-a-Service (PaaS)
Explanation
Infrastructure-as-a-Service (IaaS) is the correct answer.
Functions-as-a-Service (FaaS) and Platform-as-a-Service (PaaS) are incorrect answers.
Of the three cloud service models mentioned, IaaS provides the most control, flexibility, and portability.
FaaS provides simplicity, elastic scale, and potential cost savings, because you pay only for the time your
code is running. PaaS falls somewhere between the two.
Module 4 Third Party and Open Source Tool integration with Azure
Chef with Azure

Lesson Overview
This lesson includes the following topics:
●● What is Chef
●● Chef Automate
●● Chef Cookbooks
●● Chef Knife command
What is Chef
Chef is an infrastructure automation tool that you use for deploying, configuring, managing, and ensuring
compliance of applications and infrastructure. It provides for a consistent deployment and management
experience.
Chef helps you to manage your infrastructure in the cloud, on-premises, or in a hybrid environment by using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine, cloud resource, or network device that is under management by Chef.
The following diagram is of the high-level Chef architecture:
●● Chef Server. This is the management point, which has two options for the Chef Server: a hosted
solution, and an on-premises solution.
●● Chef Client (node). This is a Chef agent that resides on the servers you are managing.
●● Chef Workstation. This is the Admin workstation where you create policies and execute management commands. You run the knife command from the Chef Workstation to manage your infrastructure.
There are also the concepts of Chef cookbooks and recipes. These are essentially the policies that you define and apply to your servers.
Chef Automate
You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.
Chef Automate is a Chef product that allows you to package and test your applications, and provision and update your infrastructure. Using Chef, you can manage all of it with compliance and security checks.
The Chef Automate image is available on the Azure Chef Server and has all the functionality of the legacy
Chef Compliance server. You can build, deploy, and manage your applications and infrastructure on
Azure. Chef Automate is available from the Azure Marketplace, and you can try it out with a free 30-day
license. You can deploy it in Azure straight away.
Chef Automate integrates with the open-source products Chef, InSpec, and Habitat, and their associated
tools, including chef-client and ChefDK. The following image is an overview of the structure of Chef
Automate, and how it functions.
●● Habitat. Habitat is an open-source project that offers an entirely new approach to application
management. It makes the application and its automation the unit of deployment by creating plat-
form-independent build artifacts that can run on traditional servers and virtual machines (VMs). They
also can be exported into your preferred container platform, enabling you to deploy your applications
in any environment. When applications are wrapped in a lightweight “habitat” (the runtime environ-
ment), whether the habitat is a container, a bare metal machine, or platform as a service (PaaS) is no
longer the focus and does not constrain the application.
For more information about Habitat, see the Chef Habitat overview page.
●● InSpec. InSpec is a free and open-source framework for testing and auditing your applications and
infrastructure. InSpec works by comparing the actual state of your system with the desired state that
you express in easy-to-read and easy-to-write InSpec code. InSpec detects violations and displays
findings in the form of a report, but puts you in control of remediation.
You can use InSpec to validate the state of your VMs running in Azure. You can also use InSpec to scan
and validate the state of resources and resource groups inside a subscription.
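As an illustration, an InSpec control describes the desired state in a readable Ruby-based DSL. The control name and port below are invented for this sketch:

```ruby
# Hypothetical InSpec control: verify a web server is listening on port 80
control 'web-port-check' do
  impact 0.7
  title 'Web server should be listening on port 80'
  describe port(80) do
    it { should be_listening }
  end
end
```

Running inspec exec against a target machine would report this control as passed or failed, without changing the system itself.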
https://docs.microsoft.com/en-us/azure/chef/chef-habitat-overview
OHIBPROHIBITED 220 Module 4 Third Party and Open Source Tool integration with Azure
Chef Cookbooks
Chef uses a cookbook to define a set of commands that you execute on your managed client. A cookbook is a set of tasks that you use to configure an application or feature. It defines a scenario and everything required to support that scenario. Within a cookbook, there are a series of recipes, which define a set of actions to perform. Cookbooks and recipes are written in the Ruby language.
After you create a cookbook, you can then create a Role. A Role defines a baseline set of cookbooks and attributes that you can apply to multiple servers. To create a cookbook, you use the chef generate cookbook command.
USE PRUSE
Before creating a cookbook, you first configure your Chef Workstation by setting up the Chef Development Kit on your local workstation. You'll use the Chef Workstation to connect to and manage your Chef server.
Note: You can download and install the Chef Development Kit from the Chef downloads page.
Choose the Chef Development Kit that is appropriate to your operating system and version. For example:
●● macOS
●● Debian
●● Red Hat Enterprise Linux
●● SUSE
1. This command generates a set of files under the directory. Next, you need to define the set of commands that you want the Chef client to execute on your managed virtual machines. For example, the following recipe installs IIS, starts the service, and copies a template file to the wwwroot folder:

powershell_script 'Install IIS' do
    action :run
    code 'add-windowsfeature Web-Server'
end

service 'w3svc' do
    action [:enable, :start]
end

template 'c:\inetpub\wwwroot\Default.htm' do
    source 'Default.htm.erb'
end

https://docs.microsoft.com/en-us/azure/chef/chef-inspec-overview
https://downloads.chef.io/chefdk
●● Upload your cookbooks and recipes to the Chef Automate server using the following command:
knife cookbook upload < cookbook name >
●● Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple servers. Use the following command to create this role:
knife role create < role name >
●● Bootstrap a node or client and assign a role using the following command:
knife bootstrap < FQDN-for-App-VM > --ssh-user < app-admin-username > --ssh-password < app-vm-admin-password > --node-name < node name > --run-list role[ < role you defined > ] --sudo --verbose
You can also bootstrap Chef VM extensions for the Windows and Linux operating systems, in addition to provisioning them in Azure using the knife command. For more information, look up the 'cloud-api' bootstrap option in the Knife plugin documentation.
Note: You can also install the Chef extensions to an Azure VM using Windows PowerShell. By installing the Chef Management Console, you can manage your Chef server configuration and node deployments via a browser window.
https://github.com/chef/knife-azure
Puppet with Azure
Lesson Overview
This lesson includes the following topics:
●● What is Puppet
●● Deploying Puppet in Azure
●● Manifest files
What is Puppet
Puppet is a deployment and configuration management toolset that provides you with enterprise tools
that you need to automate an entire lifecycle on your Azure infrastructure. It also provides consistency
and transparency into infrastructure changes.
Puppet provides a series of open-source configuration management tools and projects. It also provides
Puppet Enterprise, which is a configuration management platform that allows you to maintain state in
both your infrastructure and application deployments.
Puppet operates using a client server model, and consists of the following core components:
●● Puppet Master. The Puppet Master is responsible for compiling code to create agent catalogs. It's also
where Secure Sockets Layer (SSL) certificates are verified and signed. Puppet Enterprise infrastructure
components are installed on a single node, the master. The master always contains a compile master
and a Puppet Server. As your installation grows, you can add additional compile masters to distribute
the catalog compilation workload.
●● Puppet Agent. Puppet Agent is the machine (or machines) managed by the Puppet Master. An agent installed on those managed machines allows them to be managed by the Puppet Master.
●● Console Services. Console Services are the web-based user interface for managing your systems.
●● Facts. Facts are metadata related to state. Puppet will query a node and determine a series of facts,
which it then uses to determine state.
https://azure.microsoft.com/en-us/marketplace/
Deploying Puppet in Azure
You can create a Puppet master in Azure by deploying the Puppet Enterprise image from the Azure Marketplace. After you select it, you need to fill in the VM's parameter values. A preconfigured system will then run
and test Puppet, and will preset many of the settings. However, these can be changed as needed. The VM
will then be created, and Puppet will run the install scripts.
Another option for creating a Puppet master in Azure is to install a Linux VM in Azure and deploy the
Puppet Enterprise package manually.
Manifest files
Puppet uses a declarative file syntax to define state. It defines what the infrastructure state should be, but not how it should be achieved. You tell it that you want to install a package, but not how to install the package.
Configuration, or state, is defined in manifest files known as Puppet Program files. These files are responsible for determining the state of the application, and have the file extension .pp.
Puppet program files have the following elements:
●● Class. A bucket that you put resources into. For example, you might have an apache class with everything required to run Apache (such as the package, config file, running server, and any users that need to be created). That class then becomes an entity that you can use to compose other workflows.
●● Resource. A single element of your configuration that you can specify parameters for.
●● Module. The collection of all the classes, resources, and other elements of the Puppet program file in a single entity.
class mrpapp {
class { 'configuremongodb': }
class { 'configurejava': }
}
class configuremongodb {
include wget
class { 'mongodb': }->
wget::fetch { 'mongorecords':
source => 'https://raw.githubusercontent.com/Microsoft/PartsUnlimitedM-
RP/master/deploy/MongoRecords.js',
destination => '/tmp/MongoRecords.js',
timeout => 0,
}->
exec { 'insertrecords':
command => 'mongo ordering /tmp/MongoRecords.js',
path => '/usr/bin:/usr/sbin',
unless => 'test -f /tmp/initcomplete'
}->
file { '/tmp/initcomplete':
ensure => 'present',
}
}
class configurejava {
include apt
$packages = ['openjdk-8-jdk', 'openjdk-8-jre']
apt::ppa { 'ppa:openjdk-r/ppa': }->
package { $packages:
ensure => 'installed',
}
}
Note: You can download custom Puppet modules, created by Puppet and the Puppet community, from Puppet Forge.
Puppet Forge is a community repository that contains thousands of modules for download and use, or modification as you need. This saves you the time necessary to recreate modules from scratch.
https://forge.puppet.com/
Ansible with Azure

Lesson Overview
●● What is Ansible
●● Ansible components
●● Installing Ansible
●● Ansible on Azure
●● Playbook structure
●● Run Ansible in Azure Cloud Shell
What is Ansible
Ansible is an open-source platform that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can provision VMs, containers, and complete cloud infrastructure. In addition to provisioning and configuring applications and their environments, Ansible allows you to automate deployment and configuration of resources in your environment such as virtual networks, storage, subnets, and resource groups.
Ansible is designed for multi-tier deployments. Unlike Puppet or Chef, Ansible is agentless, so you do not have to install software on the managed machines.
Ansible also models your IT infrastructure by describing how all of your systems interrelate, rather than managing just one system at a time.
Ansible Components
The following workflow and component diagram outlines how playbooks can be run in different circumstances, one after another. In the workflow, Ansible playbooks:
1. Provision resources. Playbooks can provision resources. In the following diagram, playbooks create load-balancer virtual networks, network security groups, and VM scale sets on Azure.
2. Configure the application. Playbooks can deploy applications to run particular services, such as installing Tomcat on a Linux machine to allow you to run a web application.
3. Manage future configurations to scale. Playbooks can alter configurations by applying playbooks to existing resources and applications, in this instance to scale the virtual machines.
In all cases, Ansible makes use of core components such as roles, modules, APIs, plugins, inventory, and other components.
Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just
managing one system at a time. The core components of Ansible are as follows:
●● Control machine. This is the machine from which the configurations are run. It can be any machine with Ansible installed on it. However, it requires Python 2 or Python 3 to be installed on the control machine as well. You can have multiple control machines: laptops, shared desktops, and servers can all run Ansible.
●● Managed nodes. These are the devices, machines, and environments that are being managed. Managed nodes can also be referred to as hosts. Ansible is not installed on nodes.
●● Playbooks. Playbooks are ordered lists of tasks that have been saved so you can run them in the same order repeatedly. Playbooks are Ansible's language for configuration, deployment, and orchestration. They can describe a policy you want your remote systems to enforce, or they can dictate a set of steps in a general IT process.
●● When you create a playbook, you do so using YAML, which defines a model of a configuration or process, and uses a declarative model. Elements such as tasks, modules, and roles reside within playbooks.
●● Modules. Ansible works by connecting to your nodes and then pushing out to the nodes small programs (or units of code), called modules. Modules are the units of code that define the configuration. They are modular, and can be re-used across playbooks. They represent the desired state of the system (declarative), are executed over SSH by default, and are removed when finished.
●● A playbook is typically made up of many modules. For example, you could have one playbook
containing three modules: a module for creating an Azure Resource group, a module for creating a
virtual network, and a module for adding a subnet.
●● Your library of modules can reside on any machine, and does not require any servers, daemons, or databases. Typically, you'll work with your favorite terminal program, a text editor, and most likely a version control system to keep track of changes to your content. A complete list of available modules is available on Ansible's modules page.
●● You can find details of the Ansible modules that are available for Azure on GitHub, and you can also preview the Ansible Azure modules on the Ansible Galaxy page.
●● Inventory. Inventory is a list of managed nodes. Ansible represents what machines it manages using an INI file that puts all of your managed machines in groups of your own choosing. When adding new machines, you do not need to use additional SSL-signing servers, thus avoiding Network Time Protocol (NTP) and Domain Name System (DNS) issues. You can create the inventory manually, or for Azure, Ansible supports dynamic inventories, which means that the host inventory is dynamically generated at runtime. Ansible supports host inventories for other managed hosts as well.
●● Roles. Roles are predefined file structures that allow automatic loading of certain variables, files, tasks, and handlers, based on the file's structure. This allows for easier sharing of roles. You might, for example, create a role for a web server configuration and reuse it across multiple playbooks.
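As an illustration of the INI inventory format described above, a minimal static inventory might look like the following (the group and host names here are invented for the example):

```ini
[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com ansible_user=azureuser
```

A playbook can then target the webservers or databases group by name, and Ansible will run its tasks against every host in that group.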
Installing Ansible
To enable a machine to act as the control machine from which to run playbooks, you need to install both
Python and Ansible.
When you install Python, you must install either Python 2 (version 2.7), or Python 3 (versions 3.5 and
higher). You can use pip, the Python package manager, to install Python, or you can use other installation
methods.
You can install Ansible in many different distributions of Linux, including, but not limited to, those in the
following list:
https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html
https://github.com/ansible/ansible/tree/devel/lib/ansible/modules/cloud/azure
https://galaxy.ansible.com/Azure/azure_preview_modules
●● CentOS
●● Debian
●● Ubuntu
●● Fedora
Note: Fedora is not supported as an endorsed Linux distribution on Azure. However, it can be run on Azure by uploading your own image. The other Linux distributions in the preceding list are endorsed on Azure.
You can use the appropriate package manager software to install Ansible and Python, such as yum, apt, or pip. For example, to install Ansible on Ubuntu, run the following commands:
## Install pre-requisite packages
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip
## Install Ansible with the Azure modules' dependencies
sudo pip install ansible[azure]
You can also install Ansible and Python on macOS, and use that environment as the control machine.
You cannot install Ansible on the Windows operating system. However, you can run playbooks from a Windows machine by utilizing other products and services, such as:
●● Windows Subsystem for Linux. Windows Subsystem for Linux is an Ubuntu Linux environment available as part of Windows.
●● Azure Cloud Shell. Use Azure Cloud Shell via a web browser on a Windows machine.
●● Microsoft Visual Studio Code.
When Ansible manages remote machines, it does not leave software installed or running on them.
Therefore, there’s no real question about how to upgrade Ansible when moving to a new version.
Ansible on Azure
There are a number of ways you can use Ansible in Azure.
You can use one of the following images available as part of the Azure Marketplace:
●● Red Hat Ansible on Azure is available as an image on Azure Marketplace, and it provides a fully configured version. This enables easier adoption for those looking to use Ansible as their provisioning and configuration management tool. This solution template will install Ansible on a Linux VM, along with tools configured to work with Azure.
●● Ansible Tower (by Red Hat). Ansible Tower by Red Hat helps organizations scale IT automation and
manage complex deployments across physical, virtual, and cloud infrastructures. Built on the proven
open-source Ansible automation engine, Ansible Tower includes capabilities that provide additional
levels of visibility, control, security, and efficiency necessary for today's enterprises. With Ansible
Tower you can:
●● Provision Azure environments with ease using pre-built Ansible playbooks.
If you don't currently have a subscription, you can obtain one directly from Red Hat.
Another option for running Ansible on Azure is to deploy a Linux VM on Azure virtual machines, which is
infrastructure as a service (IaaS). You can then install Ansible and the relevant components, and use that
as the control machine.
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ansible-install-configure?toc=%2Fen-us%2Fazure%2Fansible%2Ftoc.json&bc=%2Fen-us%2Fazure%2Fbread%2Ftoc.json
Note: The Windows operating system is not supported as a control machine. However, you can run Ansible from a Windows machine by utilizing other services and products such as Windows Subsystem for Linux, Azure Cloud Shell, and Visual Studio Code.
For more details about running Ansible in Azure, visit:
●● The Ansible on Azure documentation website.
●● The Ansible Azure scenario guide.
Playbook structure
Playbooks are the language of Ansible's configurations, deployments, and orchestrations. You use them
to manage configurations of and deployments to remote machines. Playbooks are structured with YAML
(a data serialization language), and support variables. Playbooks are declarative and include detailed
information regarding the number of machines to configure at a time.
YAML is based around the structure of key-value pairs. In the following example, the key is name, and the value is mynamevalue:
name: mynamevalue
New lines separate key-value pairs, and indentation expresses nesting.
In the YAML syntax, there is no fixed number of spaces for indentation. You can indent as many spaces as you want. However, the indentation must be uniform throughout the file, and items at the same level must occur at the same level of indentation.
When there is indentation in YAML, the indented value is the value for the parent key. If your parent key already has a scalar value, then you cannot indent further beneath it.
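For example, nested keys use uniform indentation beneath their parent key (the keys and values below are invented for illustration):

```yaml
vm:
  name: myVM        # name and location are children of the vm key
  location: eastus
  tags:
    env: dev        # env is nested one level deeper, under tags
```

Here vm has no scalar value of its own; its value is the indented mapping beneath it.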
https://docs.microsoft.com/en-us/azure/ansible/?ocid=AID754288&wt.mc_id=CFID0352
https://docs.ansible.com/ansible/latest/scenario_guides/guide_azure.html
You can also check the syntax of a playbook using the following command:
ansible-playbook playbook.yml --syntax-check
This runs the playbook file through the parser to ensure that its included items, such as files and roles, have no syntax problems. You can also add the --verbose option.
●● To see what hosts would be affected by running a playbook you can run the command:
ansible-playbook playbook.yml --list-hosts
The following code is an excerpt from a sample playbook that creates the network resources for a Linux virtual machine in Azure:

- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: myResourceGroup
        name: myVnet
        address_prefixes: "10.0.0.0/16"
    - name: Add subnet
      azure_rm_subnet:
        resource_group: myResourceGroup
        name: mySubnet
        address_prefix: "10.0.1.0/24"
        virtual_network: myVnet
    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: myResourceGroup
        allocation_method: Static
        name: myPublicIP
      register: output_ip_address
    - name: Dump public IP for VM which will be created
      debug:
        msg: "The public IP is {{ output_ip_address.state.ip_address }}."
Azure Cloud Shell has Ansible preinstalled. After you are signed into Azure Cloud Shell, specify the bash console. (You do not have to install or configure anything to be able to run Ansible.)
https://github.com/Azure-Samples/ansible-playbooks
You can also use the editor included with Azure Cloud Shell to view, open, and edit your playbook .yml
files. You can open the editor by clicking on the curly brackets icon in the taskbar at the top of Azure
Cloud Shell.
The following steps outline how to create a resource group in Azure using Ansible in Azure Cloud Shell
with bash:
1. Go to Azure Cloud Shell at https://shell.azure.com, or launch Azure Cloud Shell from within the Azure portal by clicking the Cloud Shell icon in the taskbar's top, left corner.
2. Authenticate to Azure by entering your credentials if prompted.
3. Ensure Bash is selected as the shell, in the taskbar's top, left corner.
4. Create a new file using the following command:
vi rg.yml
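The intervening steps (5-9 in the original, lost here) paste a playbook into rg.yml, save the file, and run it with ansible-playbook rg.yml. A minimal playbook consistent with the verification output in step 10 would be (the contents below are a reconstruction, not the original lab text):

```yaml
---
# Creates an Azure resource group named ansible-rg in eastus,
# matching the names shown in the verification output in step 10.
- hosts: localhost
  connection: local
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: ansible-rg
        location: eastus
```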
10. Verify that you receive output similar to the following code:
PLAY [localhost] *********************************************************************************
"state": {
    "id": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/ansible-rg",
    "location": "eastus",
    "name": "ansible-rg",
    "provisioning_state": "Succeeded",
    "tags": null
}
}
}
11. Open Azure portal and verify the resource group now displays in the portal.
You can also run playbooks on a Windows machine by using Visual Studio Code. This leverages other services that can also be integrated by using Visual Studio Code.
Complete the following steps to create network resources in Azure using Visual Studio Code:
1. If not already installed, install Visual Studio Code by downloading it from the https://code.visualstudio.com/ page. You can install it on the Windows, Linux, or macOS operating systems.
2. In Visual Studio Code, open the Extensions view.
3. Search for and install the Azure Account extension.
5. Search for and install the Ansible extension.
7. You can also view details of this extension on its Visual Studio Marketplace page: https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible&ocid=AID754288&wt.mc_id=CFID0352
8. In Visual Studio Code, go to View > Command Palette. Alternatively, you can select the Manage (cog) icon in the bottom, left corner of the window, and then select Command Palette.
10. In the Command Palette, type Azure, and then select Azure: Sign In.
12. When a browser launches and prompts you to sign in, select your Azure account. Verify that a message displays stating that you are now signed in and can close the page.
14. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
15. Create a new file and paste in the following playbook text:
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
  - name: Create resource group
    azure_rm_resourcegroup:
      name: myResourceGroup
      location: eastus
  - name: Create virtual network
    azure_rm_virtualnetwork:
      resource_group: myResourceGroup
      name: myVnet
      address_prefixes: "10.0.0.0/16"
  - name: Add subnet
    azure_rm_subnet:
      resource_group: myResourceGroup
      name: mySubnet
      address_prefix: "10.0.1.0/24"
      virtual_network: myVnet
  - name: Create public IP address
    azure_rm_publicipaddress:
      resource_group: myResourceGroup
      allocation_method: Static
      name: myPublicIP
    register: output_ip_address
  - name: Dump public IP for VM which will be created
    debug:
      msg: "The public IP is {{ output_ip_address.state.ip_address }}."
  - name: Create Network Security Group that allows SSH
    azure_rm_securitygroup:
      resource_group: myResourceGroup
      name: myNetworkSecurityGroup
      rules:
        - name: SSH
          protocol: Tcp
          destination_port_range: 22
          access: Allow
          priority: 1001
          direction: Inbound
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: myResourceGroup
      name: myNIC
      virtual_network: myVnet
      subnet: mySubnet
      public_ip_name: myPublicIP
      security_group: myNetworkSecurityGroup
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: myResourceGroup
      name: myVM
      vm_size: Standard_DS1_v2
      admin_username: azureuser
      ssh_password_enabled: true
      admin_password: Password0134
      network_interfaces: myNIC
      image:
        offer: CentOS
        publisher: OpenLogic
        sku: '7.5'
        version: latest
5. A notice might appear in the bottom, left side, informing you that the action could incur a small charge because it uses some storage when the playbook is uploaded to Cloud Shell. Select the option to confirm and continue.
7. Verify that the Azure Cloud Shell pane now displays at the bottom of Visual Studio Code and is running the playbook.
9. When the playbook finishes running, open the Azure portal and verify that the resource group, resources, and VM have all been created. If you have time, sign in to the VM with the user name and password specified in the playbook to verify as well.
Note: If you want to use a public or private key pair to connect to the Linux VM instead of a user name and password, you could use the following code in the previous Create VM task:
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
  - path: /home/adminUser/.ssh/authorized_keys
    key_data: < insert your ssh public key here... >
Cloud-init with Azure
Lesson Overview
This lesson includes the following topics:
●● What is cloud-init
●● Cloud-init components
●● Cloud-init on Azure
●● Configure a Linux VM using cloud-init and Azure Cloud Shell
What is Cloud-init
Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use
cloud-init to install packages, write files, and configure users and security.
Because cloud-init is called during the initial boot process, there are no additional steps or required
agents to apply your configuration. In addition, as the configuration is performed on initial boot, it
configures the VMs quickly and early.
Cloud-init also works across Linux distributions. For example, you don't need to use apt-get install
or yum install to install a package. Instead, you define a list of packages to install, and cloud-init
automatically uses the native package management tool for the distribution you select.
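The distribution-aware behavior described above can be pictured with a small sketch. This is purely illustrative and is not cloud-init's actual code; the mapping and function names are invented for the example:

```python
# Illustrative sketch only -- NOT cloud-init's real implementation.
# cloud-init maps a distribution to its native package tool so the
# same declarative package list works everywhere.

NATIVE_TOOL = {
    "ubuntu": "apt-get",
    "debian": "apt-get",
    "centos": "yum",
    "rhel": "yum",
}

def install_command(distro: str, packages: list) -> str:
    """Build the install command cloud-init would conceptually run."""
    tool = NATIVE_TOOL[distro.lower()]
    return f"{tool} install -y " + " ".join(packages)

# The same package list, resolved per distribution:
print(install_command("centos", ["httpd"]))   # yum install -y httpd
print(install_command("ubuntu", ["httpd"]))   # apt-get install -y httpd
```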
Cloud-init components
Cloud-init is run on Azure by using a configuration definition file, known as cloud-config. This file is a .txt file that uses YAML syntax.
The .txt cloud-config file is applied by using the Azure Command-Line Interface (Azure CLI) command with the --custom-data parameter, specifying the .txt cloud-config file.
For example, you would create a file named cloud-init.txt and place the following configuration
details into it:
#cloud-config
package_upgrade: true
packages:
- httpd
We can then run this configuration file by running the Azure CLI command as follows, specifying the
--custom-data switch and the .txt file name:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud-init.txt \
--generate-ssh-keys
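The custom data is transferred to the platform base64-encoded (the Azure CLI encodes the file for you). The round trip can be seen with a short sketch; the variable names here are illustrative:

```python
# Sketch: show the base64 round trip that --custom-data performs
# behind the scenes for the cloud-config file above.
import base64

cloud_config = """#cloud-config
package_upgrade: true
packages:
  - httpd
"""

encoded = base64.b64encode(cloud_config.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8")

# The decoded payload is byte-for-byte the original cloud-config.
assert decoded == cloud_config
print(encoded[:24], "...")
```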
For more information on how to format cloud-config files, see the Cloud Config Data page (https://cloudinit.readthedocs.io/en/latest/topics/format.html#cloud-config-data), which is part of the cloud-init documentation site.
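Cloud-config supports more than package installation. The following is a slightly fuller, purely illustrative example; the file path and page content are assumptions for the sketch, while write_files and runcmd are standard cloud-init directives:

```yaml
#cloud-config
package_upgrade: true
packages:
  - httpd
write_files:
  # Hypothetical landing page served by httpd
  - path: /var/www/html/index.html
    content: |
      <h1>Configured by cloud-init</h1>
runcmd:
  # Start and enable the web server on first boot
  - systemctl start httpd
  - systemctl enable httpd
```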
Cloud-init on Azure
If cloud-init is already installed in the Linux image, you need not do anything else to use cloud-init
because it works as soon as it is installed.
Microsoft is actively working with endorsed Linux distribution partners to have cloud-init enabled images
available in the Azure marketplace. These images will make cloud-init deployments and configurations
work seamlessly with VMs and VM Scale Sets.
The following table outlines the current cloud-init enabled images available on the Azure platform.
With cloud-init, you don't need to convert your existing scripts into a cloud-config file; cloud-init accepts
multiple input types, one of which is a Bash script. If you've been using the Linux Custom Script Azure
Extension to run your scripts, you can migrate them to use cloud-init. However, while Azure extensions
have integrated reporting to alert you if scripts fail, a cloud-init image deployment will not fail if the script
fails.
Windows Azure Linux Agent (WALinuxAgent) is an Azure platform-specific agent that you use to provision and configure VMs, and manage Azure extensions. To allow existing cloud-init customers to use their current cloud-init scripts, Microsoft is enhancing the task of configuring VMs to use cloud-init instead of the Linux Agent. If you have existing investments in cloud-init scripts for configuring Linux systems, there are no additional settings required to enable them.
If you don't include the Azure CLI --custom-data switch at provisioning time, WALinuxAgent takes the minimal VM provisioning parameters required to provision the VM and completes the deployment with default settings. If you do reference the --custom-data switch, whatever is contained in your custom data (individual settings or a full script) overrides the WALinuxAgent defaults.
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Azure free account webpage at https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio.
1. Go to the Azure Cloud Shell at https://shell.azure.com, or launch Azure Cloud Shell from within the Azure portal by selecting the PowerShell icon in the top, left corner.
2. Authenticate to Azure, and enter your credentials if prompted.
3. On the left side of the Azure portal taskbar, ensure Bash is selected as the shell.
4. Create a new file using the following command:
vi cloud-init.txt
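Steps 5-8 of the original lab (lost here) enter the cloud-config contents into the file and save it. Consistent with the example earlier in this lesson, the file would contain:

```yaml
#cloud-config
package_upgrade: true
packages:
  - httpd
```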
9. Before deploying a VM and using cloud-init to configure it, you first need to create a resource group
in Azure in which to deploy the VM by running the following command:
az group create --name cloud-init-rg1 --location <your nearest datacenter>
10. After the command completes, verify that the resource group has been created successfully.
11. Next, run the cloud-init configuration file by running the Azure CLI command as in the following code, specifying the --custom-data switch and the .txt file name:
az vm create \
--resource-group cloud-init-rg1 \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud-init.txt \
--generate-ssh-keys
12. After the configuration file finishes, open the Azure portal and verify the VM has been created.
13. Open the deployed VM, and then select Connect.
14. From the Connect pane, copy the SSH connection details.
16. Return to the Azure Cloud Shell and sign in to the VM with the credentials you obtained. Verify that you can connect to the VM, and then run the following command to verify the machine status:
cloud-init status
19. Now run the following command to view the package installation history:
sudo yum history
20. Verify that the package history displays httpd, as was specified in the cloud-init .txt configuration file.
Terraform with Azure
Lesson Overview
This lesson includes the following topics:
●● What is Terraform
●● Terraform components
●● Terraform on Azure
●● Installing Terraform
●● Terraform config file structure
●● Run Terraform in Azure Cloud Shell
What is Terraform
HashiCorp Terraform (https://www.terraform.io/) is an open-source tool that allows you to provision, manage, and version cloud infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources.
Terraform's command-line interface (CLI) provides a simple mechanism to deploy and version the
configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and
preview infrastructure changes before you deploy them.
Terraform also supports multi-cloud scenarios. This means it enables developers to use the same tools
and configuration files to manage infrastructure on multiple cloud providers.
You can run Terraform interactively from the CLI with individual commands, or non-interactively as part of an automated pipeline.
Terraform components
●● Configuration files. Text-based configuration files allow you to define infrastructure and application configuration, and end in the .tf or .tf.json extension. The files can be in either of the following two formats:
●● Terraform. The Terraform format is more user-readable, supports comments, and is the generally recommended format for most Terraform files. Terraform files end in .tf.
●● JSON. The JSON format is meant more for machines to create, modify, and update. However, it can also be used by Terraform operators if you prefer. JSON files end in .tf.json.
The order of items (such as variables and resources) within the configuration doesn't matter; Terraform
configurations are declarative, so references to other resources and variables don't depend on the
order in which they're defined.
●● Terraform CLI. Terraform CLI is a command-line interface from which you run configurations. You can run commands such as terraform plan and terraform apply, along with many others. A CLI configuration file that configures per-user settings for the CLI is also available. However, this is separate from the CLI infrastructure configuration. In Windows operating system environments, the configuration file is named terraform.rc, and is stored in the relevant user's %APPDATA% directory. On Linux systems, the file is named .terraformrc (note the leading period), and is stored in the home directory of the relevant user.
●● Modules. Modules in Terraform are self-contained packages of Terraform configurations that are managed as a group. You use modules to create reusable components in Terraform and for basic code organization. A list of available modules for Azure is available on the Terraform Registry page at https://registry.terraform.io/browse?provider=azurerm.
●● Provider. The provider is responsible for understanding API interactions and exposing resources.
●● Overrides. Overrides are a way to create configuration files that are loaded last and merged into (rather than appended to) your configuration. You can create overrides to modify Terraform behavior without having to edit the Terraform configuration. They can also be used as temporary modifications that you can make to Terraform configurations without having to modify the configuration itself.
●● Resources. Resources are sections of a configuration file that define components of your infrastructure,
such as VMs, network resources, containers, dependencies, or DNS records. The resource block
creates a resource of the given TYPE (first parameter) and NAME (second parameter). However, the
combination of the type and name must be unique. Within the braces is the resource's configuration.
●● Execution plan. You can issue a command in the Terraform CLI to generate an execution plan. The execution plan shows what Terraform will do when a configuration is applied. This enables you to verify changes and flag potential issues. The command for the execution plan is terraform plan.
●● Resource graph. Using a resource graph, you can build a dependency graph of all resources, and can
then create and modify resources in parallel. This helps you increase efficiency when provisioning and
configuring resources.
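As an example of the resource block structure described above, the following minimal block declares an Azure resource group; the names here are illustrative and are not taken from the course labs:

```hcl
# TYPE = "azurerm_resource_group" (first parameter),
# NAME = "example" (second parameter); the TYPE/NAME pair
# must be unique within the configuration.
resource "azurerm_resource_group" "example" {
    name     = "exampleResourceGroup"
    location = "eastus"
}
```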
Terraform on Azure
You download Terraform for use in Azure via: Azure Marketplace, Terraform Marketplace, or Azure VMs.
Azure Marketplace offers a fully-configured Linux image containing Terraform with the following characteristics:
●● The deployment template will install Terraform on a Linux (Ubuntu 16.04 LTS) VM along with tools
configured to work with Azure. Items downloaded include:
●● Terraform (latest)
●● Azure CLI 2.0
The Terraform Marketplace image makes it easy to get started using Terraform on Azure, without having
to install and configure Terraform manually. There are no software charges for this Terraform VM image.
You pay only the Azure hardware usage fees that are assessed based on the size of the VM that's provisioned.
You can also deploy a Linux or Windows VM in Azure VM's IaaS service, install Terraform and the relevant
components, and then use that image.
Installing Terraform
To get started, you must install Terraform on the machine from which you are running the Terraform
commands.
Terraform can be installed on Windows, Linux, or macOS environments. Go to the Terraform downloads page at https://www.terraform.io/downloads.html, and choose the appropriate download package for your environment.
4. Verify the installation by running the command terraform. Verify that the Terraform help output displays.
Terraform supports a number of different methods for authenticating to Azure. You can use:
●● The Azure CLI
●● A Managed Service Identity (MSI)
●● A service principal and a client certificate
●● A service principal and a client secret
When running Terraform as part of a continuous integration pipeline, you can use either an Azure service
principal or MSI to authenticate.
To configure Terraform to use your Azure Active Directory (Azure AD) service principal, set the following
environment variables:
●● ARM_SUBSCRIPTION_ID
●● ARM_CLIENT_ID
●● ARM_CLIENT_SECRET
●● ARM_TENANT_ID
●● ARM_ENVIRONMENT
These variables are then used by the Azure Terraform modules. You can also set the ARM_ENVIRONMENT variable if you are working with an Azure cloud other than the Azure public cloud. For example:
#!/bin/sh
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=your_subscription_id
export ARM_CLIENT_ID=your_appId
export ARM_CLIENT_SECRET=your_password
export ARM_TENANT_ID=your_tenant_id
Note: After you install Terraform, and before you can apply .tf config files, you must run the following command to initialize Terraform for the installed instance:
terraform init
The following sample config file defines a complete Linux VM environment in Azure, including the resource group, network, storage, and VM resources:
# Create resource group
resource "azurerm_resource_group" "myterraformgroup" {
    name     = "myResourceGroup"
    location = "eastus"

    tags {
        environment = "Terraform Demo"
    }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
    name                = "myVnet"
    address_space       = ["10.0.0.0/16"]
    location            = "eastus"
    resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

    tags {
        environment = "Terraform Demo"
    }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
    name                 = "mySubnet"
    resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
    virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
    address_prefix       = "10.0.1.0/24"
}

# Create public IP address
resource "azurerm_public_ip" "myterraformpublicip" {
    name                         = "myPublicIP"
    location                     = "eastus"
    resource_group_name          = "${azurerm_resource_group.myterraformgroup.name}"
    public_ip_address_allocation = "dynamic"

    tags {
        environment = "Terraform Demo"
    }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
    name                = "myNetworkSecurityGroup"
    location            = "eastus"
    resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

    security_rule {
        name                       = "SSH"
        priority                   = 1001
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
    }

    tags {
        environment = "Terraform Demo"
    }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
    name                      = "myNIC"
    location                  = "eastus"
    resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
    network_security_group_id = "${azurerm_network_security_group.myterraformnsg.id}"

    ip_configuration {
        name                          = "myNicConfiguration"
        subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
        private_ip_address_allocation = "dynamic"
        public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
    }

    tags {
        environment = "Terraform Demo"
    }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
    byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
    name                     = "diag${random_id.randomId.hex}"
    resource_group_name      = "${azurerm_resource_group.myterraformgroup.name}"
    location                 = "eastus"
    account_tier             = "Standard"
    account_replication_type = "LRS"

    tags {
        environment = "Terraform Demo"
    }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
    name                  = "myVM"
    location              = "eastus"
    resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
    network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
    vm_size               = "Standard_DS1_v2"

    storage_os_disk {
        name              = "myOsDisk"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Premium_LRS"
    }

    storage_image_reference {
        publisher = "Canonical"
        offer     = "UbuntuServer"
        sku       = "16.04.0-LTS"
        version   = "latest"
    }

    os_profile {
        computer_name  = "myvm"
        admin_username = "azureuser"
    }

    os_profile_linux_config {
        disable_password_authentication = true
        ssh_keys {
            path     = "/home/azureuser/.ssh/authorized_keys"
            key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
        }
    }

    boot_diagnostics {
        enabled     = "true"
        storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
    }

    tags {
        environment = "Terraform Demo"
    }
}
Here's an example of Azure Cloud Shell with Bash shell, running Terraform.
You can also use the Azure Cloud Shell editor to view, open, and edit your .tf files. To open the editor,
select the braces in the taskbar at the top of Azure Cloud Shell window.
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Azure free account webpage at https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio.
The following steps outline how to create a resource group in Azure using Terraform in Azure Cloud Shell
with bash:
1. Open the Azure Cloud Shell at https://shell.azure.com, or launch Azure Cloud Shell from within the
Azure portal by selecting the Azure PowerShell icon.
2. Authenticate to Azure by entering your credentials, if prompted.
3. In the taskbar, ensure Bash is selected.
4. Create a new file using the following code:
vi terraform-createrg.tf
provider "azurerm" {
}

resource "azurerm_resource_group" "rg" {
    name     = "testResourceGroup"
    location = "westus"
}
9. Save the file, and then initialize Terraform by running the following command:
terraform init
10. You should receive a message saying Terraform was successfully initialized.
12. Apply the configuration in the .tf file by running the following command:
terraform apply
13. You should receive a prompt saying a plan has been generated. Details of the changes should be
listed, followed by a prompt asking if you wish to apply the changes or cancel these actions.
15. Enter a value of yes and select Enter. The command should run successfully, with output similar to the following screenshot.
17. Open the Azure portal and verify the resource group is now present in the portal.
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Azure free account webpage at https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio.
1. If Visual Studio Code is not already installed, you will need to install it. You can download it from the https://code.visualstudio.com/ website, and can install it on Windows, Linux, or macOS.
2. In Visual Studio Code, open the Extensions view.
3. Search for and install the Azure Account extension.
5. Search for and install the Terraform extension. Ensure that you select the Azure Terraform extension authored by Microsoft, as there are a few available by other authors.
7. You can view more details of this extension on its Visual Studio Marketplace page at https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureterraform.
8. In Visual Studio Code, open the command palette by selecting View > Command Palette. You can also access the command palette by selecting the Manage (cog) icon on the bottom, left side of the window, and then selecting Command Palette.
10. In the Command Palette search field, type Azure, and from the results, select Azure: Sign In.
12. When a browser launches and prompts you to sign in to Azure, select your Azure account. The message "You are signed in now and can close this page." should display in the browser.
14. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
15. Create a new file and paste in the following config file contents:
# Create resource group
resource "azurerm_resource_group" "myterraformgroup" {
    name     = "myResourceGroup"
    location = "eastus"

    tags {
        environment = "Terraform Demo"
    }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
    name                = "myVnet"
    address_space       = ["10.0.0.0/16"]
    location            = "eastus"
    resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

    tags {
        environment = "Terraform Demo"
    }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
    name                 = "mySubnet"
    resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
    virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
    address_prefix       = "10.0.1.0/24"
}

# Create public IP address
resource "azurerm_public_ip" "myterraformpublicip" {
    name                         = "myPublicIP"
    location                     = "eastus"
    resource_group_name          = "${azurerm_resource_group.myterraformgroup.name}"
    public_ip_address_allocation = "dynamic"

    tags {
        environment = "Terraform Demo"
    }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
    name                = "myNetworkSecurityGroup"
    location            = "eastus"
    resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

    security_rule {
        name                       = "SSH"
        priority                   = 1001
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
    }

    tags {
        environment = "Terraform Demo"
    }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
    name                      = "myNIC"
    location                  = "eastus"
    resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
    network_security_group_id = "${azurerm_network_security_group.myterraformnsg.id}"

    ip_configuration {
        name                          = "myNicConfiguration"
        subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
        private_ip_address_allocation = "dynamic"
        public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
    }

    tags {
        environment = "Terraform Demo"
    }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
    byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
    name                     = "diag${random_id.randomId.hex}"
    resource_group_name      = "${azurerm_resource_group.myterraformgroup.name}"
    location                 = "eastus"
    account_tier             = "Standard"
    account_replication_type = "LRS"

    tags {
        environment = "Terraform Demo"
    }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
    name                  = "myVM"
    location              = "eastus"
    resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
    network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
    vm_size               = "Standard_DS1_v2"

    storage_os_disk {
        name              = "myOsDisk"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Premium_LRS"
    }

    storage_image_reference {
        publisher = "Canonical"
        offer     = "UbuntuServer"
        sku       = "16.04.0-LTS"
        version   = "latest"
    }

    os_profile {
        computer_name  = "myvm"
        admin_username = "azureuser"
        admin_password = "Password0134!"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }

    boot_diagnostics {
        enabled     = "true"
        storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
    }

    tags {
        environment = "Terraform Demo"
    }
}
19. If Azure Cloud Shell is not open in Visual Studio Code, a message might appear in the bottom, left corner asking whether you want to open the cloud shell. Accept, and open it.
20. Wait for the Azure Cloud Shell pane to appear at the bottom of the Visual Studio Code window and start running the .tf file. When you are prompted to apply the plan or cancel, type yes, and then press Enter.
22. After the command completes successfully, review the list of resources created.
23. Open the Azure portal and verify the resource group, resources, and VM have been created. If you have time, sign in with the user name and password specified in the .tf config file to verify.
Note: If you wanted to use a public or private key pair to connect to the Linux VM instead of a user name and password, you could use the os_profile_linux_config block, set the disable_password_authentication key value to true, and include the SSH key details, as in the following code.
os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
        path     = "/home/azureuser/.ssh/authorized_keys"
        key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
    }
}
You'd also need to remove the admin_password value in the os_profile block that is present in the example above.
Note: You could also embed the Azure authentication within the script. In that case, you would not need to install the Azure Account extension, as in the following example:
provider "azurerm" {
    subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
Labs Overview
1. If you already have a Microsoft account that has not already been used to sign up for a free Azure trial subscription, you're ready to get started. If not, don't worry, just create a new Microsoft account at https://aka.ms/edx-devops200.4x-msa.
2. After you've created a Microsoft account, create your free Azure account at https://aka.ms/edx-devops200.4x-az2. You'll need to sign in with your Microsoft account if you're not already signed in. Then you'll need to:
●● Enter your cellphone number and have Microsoft send you a text message to verify your identity.
Like many other cloud infrastructure platforms today, Azure is continuously developing updates to its services and components. If you've had your own subscriptions for any length of time, you are already aware that changes to services happen much more rapidly than with more traditional application deployment models.
Every effort will be made to update course content when there are major changes to product functionality. However, there will be occasions where course content does not exactly match the latest version of the product. In most cases, you should still be able to understand the tasks and complete the course. The general guidance from the Azure documentation teams is to check the documentation frequently to see what upcoming notifications have been posted or where documentation has been updated to reflect the latest changes.
We encourage you to consult the Azure updates page at https://azure.microsoft.com/en-us/updates/ as a starting point for the latest information about updates. From there, you can avail yourself of blogs and other resources that are provided to help you stay current in a cloud-enabled world. For answers to common questions about the free account, see https://azure.microsoft.com/en-us/free/free-account-faq/.
AZ-400T05-M04-Lab Tasks
●● Deploy app with Chef on Azure: http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithChefonAzure.html
●● Deploy app with Puppet on Azure: http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithPuppetonAzure.html
●● Ansible with Azure: http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-AnsiblewithAzure.html
Module Review Questions
Chef Client
Chef Workstation
Which of the following are open-source products that are integrated into the Chef Automate image available from Azure Marketplace?
Habitat
Facts
Console Services
InSpec
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
Master
Agent
Facts
Habitat
Complete the following sentence. The main elements of a Puppet Program (PP) Manifest file are Class, Resource and...?
Module
Habitat
InSpec
Cookbooks
Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
Puppet
Chef
Ansible
True or false: The Control Machine in Ansible must have Python installed?
True
False
Which of the following statements describes a common use for the cloud-init package?
cloud-init is used to apply custom configurations to a Linux VM, as it boots for the first time.
cloud-init is used to add support for multiple key types and algorithms.
cloud-init is used to manage access to Hardware Security Modules (HSM).
cloud-init is used to manage keys associated with an Azure Storage account.
Which of the following statements about the cloud-init package are correct?
The --custom-data parameter passes the name of the configuration file (.txt).
Configuration files (.txt) are encoded in base64.
The YML syntax is used within the configuration file (.txt).
cloud-init works across Linux distributions.
True or false: Terraform ONLY supports configuration files with the file extension .tf?
True
False
Which of the following core Terraform components can modify Terraform behavior, without having to edit
the Terraform configuration?
Configuration files
Overrides
Execution plan
Resource graph
Answers
Which of the following are open-source products that are integrated into the Chef Automate image
available from Azure Marketplace?
■■ Habitat
Facts
Console Services
■■ InSpec
Explanation
The correct answers are Habitat and InSpec.
Facts and Console Services are incorrect answers.
Facts are metadata used to determine the state of resources managed by the Puppet automation tool.
Console Services is a web-based user interface for managing your system with the Puppet automation tool.
Habitat and InSpec are two open-source products that are integrated into the Chef Automate image
available from Azure Marketplace. Habitat makes the application and its automation the unit of deployment, by allowing you to create platform-independent build artifacts called 'habitats' for your applications.
InSpec allows you to define desired states for your applications and infrastructure. InSpec can conduct audits
to detect violations against your desired state definitions, and generate reports from its audit results.
Which of the following are core components of the Puppet automation platform?
Complete the following sentence. The main elements of a Puppet Program (PP) Manifest file are Class,
Resource and...?
■■ Module
Habitat
InSpec
Cookbooks
Explanation
Module is the correct answer.
All other answers are incorrect answers.
Habitat, InSpec and Cookbooks are incorrect because they relate to the Chef automation platform.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and Module. Classes define
related resources according to their classification, to be reused when composing other workflows. Resources
are single elements of your configuration which you can specify parameters for. Modules are collections of all
the classes, resources, and other elements in a single entity.
Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
■■ Puppet
■■ Chef
Ansible
Explanation
The correct answers are: Puppet and Chef.
Ansible is an incorrect answer.
Ansible is agentless because you do not need to install an Agent on each of the target machines it manages.
Ansible uses the Secure Shell (SSH) protocol to communicate with target machines. You choose when to
conduct compliance checks and perform corrective actions, instead of relying on Agents and a Master to
perform them automatically.
Puppet and Chef use Agents to communicate with target machines. With Puppet and Chef, you install an
Agent on each target machine managed by the platform. Agents typically run as a background service and
facilitate communication with a Master, which runs on a server. The Master uses information provided by
Agents to conduct compliance checks and perform corrective actions automatically.
True or false: The Control Machine in Ansible must have Python installed?
■■ True
False
Explanation
True is the correct answer.
False is an incorrect answer.
A Control Machine in Ansible must have Python installed. Control Machine is one of the core components of
Ansible. Control Machine is for running configurations. The other core components of Ansible are Managed
Nodes, Playbooks, Modules, Inventory, Roles, Facts, and Plug-ins. Managed Nodes are resources managed
by Ansible. Playbooks are ordered lists of Ansible tasks. Modules are small blocks of code within a Playbook
that perform specific tasks. Inventory is a list of managed nodes. Roles allow for the automatic and sequenced
loading of variables, files, tasks and handlers. Facts are data points about the remote system which Ansible
is managing. Plug-ins supplement Ansible's core functionality.
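As a sketch of how these components fit together, the following snippet writes a minimal Inventory and Playbook. The hostnames, group name, and file names are hypothetical examples, not part of the course lab:

```shell
# A minimal Inventory: a list of managed nodes (hypothetical hostnames).
cat > inventory.ini <<'EOF'
[webservers]
web1.example.com
web2.example.com
EOF

# A Playbook: an ordered list of tasks, each invoking a Module
# (here, the built-in ping module to verify connectivity).
cat > site.yml <<'EOF'
---
- hosts: webservers
  tasks:
    - name: Verify connectivity to each managed node
      ping:
EOF

# With Ansible installed on the Control Machine (which needs Python),
# this would run as:
# ansible-playbook -i inventory.ini site.yml
```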
Which of the following statements describes a common use for the cloud-init package?
■■ cloud-init is used to apply custom configurations to a Linux VM, as it boots for the first time.
cloud-init is used to add support for multiple key types and algorithms.
cloud-init is used to manage access to Hardware Security Modules (HSM).
cloud-init is used to manage keys associated with an Azure Storage account.
Explanation
The correct answer is: cloud-init is used to apply custom configurations to a Linux VM, as it boots for the
first time.
All other answers are incorrect answers because they describe uses for Azure Key Vault.
Cloud-init is a package that is often used to add custom configurations to a Linux VM, as it boots for the
first time. Cloud-init works across Linux distributions. In Azure, you can add custom configurations to a
Linux VM with cloud-init and a configuration file (.txt). Any provisioning configuration information con-
tained in the specified configuration file (.txt) is applied to the new VM, when the VM is created.
Which of the following statements about the cloud-init package are correct?
■■ The --custom-data parameter passes the name of the configuration file (.txt).
■■ Configuration files (.txt) are encoded in base64.
■■ The YML syntax is used within the configuration file (.txt).
■■ cloud-init works across Linux distributions.
Explanation
All of the answers are correct answers.
In Azure, you can add custom configurations to a Linux VM with cloud-init by appending the --custom-data
parameter, and passing the name of a configuration file (.txt), to the az vm create command. The --cus-
tom-data parameter passes the name of the configuration file (.txt) as an argument to cloud-init. Then,
cloud-init applies Base64 encoding to the contents of the configuration file (.txt), and sends it along with any
provisioning configuration information that is contained within the configuration file (.txt). Any provisioning
configuration information contained in the specified configuration file (.txt) is applied to the new VM, when
the VM is created. The YML syntax is used within the configuration file (.txt) to define any provisioning
configuration information that needs to be applied to the VM.
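A minimal sketch of this workflow follows. The resource and file names are hypothetical; the az vm create call needs an Azure subscription, so it is shown as a comment, and the Base64 round trip simply illustrates that the encoding is lossless:

```shell
# Minimal cloud-config file (YML syntax), as described above.
cat > cloud-init.txt <<'EOF'
#cloud-config
package_upgrade: true
packages:
  - nginx
EOF

# The configuration is Base64-encoded before being sent to the VM;
# this local round trip shows the encoding preserves the content.
encoded=$(base64 < cloud-init.txt)
echo "$encoded" | base64 --decode > roundtrip.txt

# Hypothetical names; requires an Azure subscription, so comment only:
# az vm create \
#   --resource-group myResourceGroup \
#   --name myLinuxVM \
#   --image UbuntuLTS \
#   --custom-data cloud-init.txt \
#   --generate-ssh-keys
```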
True or false: Terraform ONLY supports configuration files with the file extension .tf?
True
■■ False
Explanation
False is the correct answer.
True is an incorrect answer because Terraform supports configuration files with the file extensions .tf and
.tf.json.
Terraform configuration files are text based configuration files that allow you to define infrastructure and
application configurations. Terraform uses the file extension .tf for Terraform format configuration files, and
the file extension .tf.json for Terraform JSON format configuration files. Terraform supports configuration
files in either .tf or .tf.json format. The Terraform .tf format is more human-readable, supports comments,
and is the generally recommended format for most Terraform files. The JSON format .tf.json is meant for
use by machines, but you can write your configuration files in JSON format if you prefer.
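As an illustration, here is the same hypothetical variable definition expressed in both supported formats; only the .tf variant can carry comments:

```shell
# Human-readable .tf format, which supports comments:
cat > main.tf <<'EOF'
# Comments are supported in the .tf format.
variable "location" {
  default = "westeurope"
}
EOF

# Machine-oriented .tf.json format, expressing the same content:
cat > main.tf.json <<'EOF'
{
  "variable": {
    "location": {
      "default": "westeurope"
    }
  }
}
EOF

# The JSON variant must be valid JSON; a quick local check:
python3 -m json.tool main.tf.json > /dev/null && echo "valid JSON"
```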
Which of the following core Terraform components can modify Terraform behavior, without having to
edit the Terraform configuration?
Configuration files
■■ Overrides
Execution plan
Resource graph
Explanation
Overrides is the correct answer.
All other answers are incorrect answers.
Configuration files, in .tf or .tf.json format, allow you to define your infrastructure and application configura-
tions with Terraform.
Execution plan defines what Terraform will do when a configuration is applied.
Resource graph builds a dependency graph of all Terraform managed resources.
Overrides modify Terraform behavior without having to edit the Terraform configuration. Overrides can also
be used to apply temporary modifications to Terraform configurations without having to modify the
configuration itself.
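A minimal sketch of an override, assuming a hypothetical azurerm_resource_group resource: Terraform loads files named override.tf (or ending in _override.tf) last and merges their contents over matching blocks, so the base configuration never has to be edited:

```shell
# Base configuration (hypothetical resource):
cat > main.tf <<'EOF'
resource "azurerm_resource_group" "example" {
  name     = "rg-demo"
  location = "westeurope"
}
EOF

# override.tf is loaded last and merged over matching blocks, so the
# location below takes effect without main.tf being modified:
cat > override.tf <<'EOF'
resource "azurerm_resource_group" "example" {
  location = "northeurope"
}
EOF
```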
Module 5 Compliance and Security
Lesson Overview
This lesson includes the following topics:
●● What is rugged DevOps?
●● Rugged DevOps pipeline
●● Software Composition Analysis (SCA)
●● WhiteSource integration with Azure DevOps pipeline
●● Fortify integration with Microsoft Azure DevOps pipeline
●● CheckMarx integration with Azure DevOps
●● Veracode integration with Azure DevOps
●● How to integrate SCA checks into pipelines
●● DevOps and pipeline security
●● Secure DevOps Kit for Azure
https://www.microsoft.com/en-us/security/operations/security-intelligence-report
and also:
●● 4% of SaaS storage apps
●● 3% of SaaS collaboration apps
support session protection across all HTTP headers.
Rugged DevOps brings together the notions of DevOps and Security. DevOps is about working faster.
Security is about emphasizing thoroughness, and is typically applied at the end of the cycle, potentially
generating unplanned work right at the end of the pipeline. Rugged DevOps is a set of practices designed
to integrate DevOps and security, and to meet the goals of both more effectively.
The goal of a Rugged DevOps pipeline is to allow development teams to work fast without breaking their
project by introducing unwanted vulnerabilities.
Note: Rugged DevOps is also sometimes referred to as DevOpsSec. You might encounter both terms, but
they both refer to the same concept.
Security has typically been on a slower cycle and involved traditional security methodologies such as:
●● Access control
●● Environment hardening
●● Perimeter protection
In the context of Rugged DevOps, security includes all of these elements and more. With Rugged DevOps,
security is more about securing the pipeline. It is about determining where you can add security to
the elements that plug into your build and release pipeline. For example, it's about how and where you
can add security to your automation practices, production environments, and other pipeline elements
while attempting to retain the speed of DevOps.
Rugged DevOps includes bigger questions such as:
●● Is my pipeline consuming third-party components, and if so, are they secure?
●● Are there known vulnerabilities within any of the third-party software we use?
Security and Compliance in the pipeline
There are two important areas of a Rugged DevOps pipeline that are not part of other DevOps
pipelines:
●● Package management, and the approval process associated with it. In the workflow diagram there
are additional steps which account for how software packages are added to the pipeline, and the
approval process they need to go through before they are used. This occurs very early in the pipeline,
to try to identify any issues early in the cycle.
●● Source code scanning. There is an additional step for scanning the source code. This is to perform a
security scan and verify that certain security vulnerabilities are not present in our application source
code. This occurs after the app is built but before release and pre-release testing, again to try to
identify security vulnerabilities as early as possible.
We will address these areas in the remainder of this lesson, the problems they represent, and how
solutions can be achieved.
Just as teams use version control as a single source of truth for source code, Rugged DevOps relies on a
package manager as the unique source of binary components. By using binary package management, a
development team can create a local cache of approved components, and make this a trusted feed for
the continuous integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow, which you can use to
organize and share access to your packages. It allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in
Git.
●● Protect your packages. Keep every public source package you use, including packages from npmjs
and nuget.org, safe in your feed where only you can delete it, and where it’s backed by the enter-
prise-grade Azure SLA.
●● Integrate seamless package handling into your CI/CD pipeline. Easily access all your artifacts in builds
and releases. Artifacts integrate natively with the Azure Pipelines CI/CD tool.
Maven, npm, and NuGet packages are supported from public and private sources with teams of any size.
Azure Artifacts comes with Azure DevOps, but the extension is also available from the Visual Studio
Marketplace.
https://docs.microsoft.com/en-us/azure/devops/artifacts/overview?view=vsts
https://marketplace.visualstudio.com/items?itemName=ms.feed
Note: After you publish a particular version of a package to a feed, that version number is permanently
reserved. You cannot upload a newer revision package with that same version number, or delete that
version and upload a new package with the same version number. The published version is immutable.
Developers today are more productive than ever as a result of the wide availability of reusable open-
source software (OSS) components. This practical approach to reuse includes runtimes, which are availa-
ble on Windows and Linux operating systems, such as Microsoft .NET Core and Node.js.
At the same time, OSS reuse comes with the risk of the reused dependencies having security vulnerabili-
ties. As a result, many users find security vulnerabilities in their applications due to the Node.js package
versions they consume.
OSS offers a new concept, sometimes called Software Composition Analysis (SCA), which is depicted in the
following image.
When consuming an OSS component, whether you're creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest correct version to avoid any old vulnerabilities or license misuse.
2. Validate that the OSS components are in fact the correct binaries for your version. In the release
pipeline, validate binaries to ensure they’re correct and to keep a traceable bill of materials.
3. In the event of a vulnerability, be notified immediately, and be able to correct and redeploy the
component automatically to prevent a security vulnerability or license misuse from reused software.
The Azure DevOps Marketplace is an important site for addressing Rugged DevOps issues. From here you
can integrate specialist security products into your Azure DevOps pipeline. Having a full suite of
extensions that allow seamless integration into Azure DevOps pipelines is invaluable.
WhiteSource is one such example of an extension available on the Azure DevOps Marketplace. Using
WhiteSource, you can integrate extensions with your CI/CD pipeline to address Rugged DevOps
security-related issues. For a team consuming external packages, the WhiteSource extension specifically
addresses the questions of open-source security, quality, and license compliance. Because most breaches
today target known vulnerabilities in known components, this is essential hygiene for consuming open-
source products.
https://marketplace.visualstudio.com/
https://marketplace.visualstudio.com/items?itemName=whitesource.whitesource
Receive alerts on open-source security vulnerabilities
When a new security vulnerability is discovered, WhiteSource automatically generates an alert and
provides targeted remediation guidance. This can include links to patches, fixes, relevant source files,
even recommendations to change system configuration to prevent exploitation.
For searching online repositories such as GitHub and Maven Central, WhiteSource also offers an innova-
tive browser extension. Even before choosing a new component, a developer can see its security vulnera-
bilities, quality, and license issues, and whether it fits their company’s policies.
Micro Focus Fortify Static Code Analyzer (Fortify SCA) identifies root causes of software security vulnera-
bilities. It then delivers accurate, risk-ranked results with line-of-code remediation guidance.
https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts
Fortify on Demand delivers application security as a service (SaaS). It automatically submits static and
dynamic scan requests to the SaaS platform. For static assessments, projects are uploaded to Fortify on
Demand; for dynamic assessments, it uses the application's preconfigured URL.
https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast
the developer in mind. No time is wasted trying to understand the required action items to mitigate
detected security or compliance risks.
Pull requests (PRs) are the way DevOps teams submit changes. Prior to a PR, a developer needs the ability
to see the effect of code changes to avoid introducing new issues. In a DevOps process, each PR is
typically small, and merges are continual, enabling the master branch of code to stay fresh. Ideally, a
developer can check for security issues prior to a PR.
Azure Marketplace extensions that facilitate integrating scans during PRs include:
●● WhiteSource. Facilitates validating dependencies with its binary fingerprinting.
●● Checkmarx. Provides an incremental scan of changes.
●● Veracode. Has the concept of a developer sandbox.
●● Black Duck. An auditing tool for open source code to help identify, fix, and manage
compliance.
These extensions allow a developer to experiment with changes prior to submitting them.
https://www.whitesourcesoftware.com/
https://www.checkmarx.com/
https://www.veracode.com/
https://www.blackducksoftware.com/
https://marketplace.visualstudio.com/items?itemName=Veracode.veracode-vsts-build-extension
●● Integrate application security into the development tools you already use. From within Azure DevOps
and Microsoft Team Foundation Server (TFS) you can automatically scan code using the Veracode
Application Security Platform to find security vulnerabilities, import any security findings that violate
your security policy as work items, and even optionally stop the build if serious security issues are
discovered.
●● Don't stop for false alarms: Because Veracode gives you accurate results and prioritizes them based
on severity, you won’t need to waste resources responding to hundreds of false positives. Microsoft
has assessed over 2 trillion lines of code in 15 languages and over 70 frameworks, and the process
improves with every assessment as a result of the rapid update cycles and continuous improvement
processes. And, if something does get through, you can mitigate it using the easy Veracode workflow.
●● Align your application security practices with your development practices: Do you have a large or
distributed development team? Do you have too many revision control branches? You can integrate
your Azure DevOps workflows with the Veracode Developer Sandbox, which supports multiple
development branches, feature teams, and other parallel development practices.
●● Don't just find vulnerabilities, fix them: Veracode gives you remediation guidance with each finding
and the data path that an attacker would use to reach the application's weak point. Veracode also
highlights the most common sources of vulnerabilities to help you prioritize remediation. In addition,
when vulnerability reports don’t provide enough clarity, you can set up one-on-one developer
consultations with Microsoft experts who have backgrounds in both security and software development.
Security issues that are found by Veracode, which could prevent you from releasing, show up
automatically in your teams' list of work items, and are automatically updated and closed after you
scan your fixed code.
●● Proven onboarding process allows for scanning on day one. Want to get started quickly? The cloud-
based Veracode Application Security Platform is designed to get you going quickly and be easy to use
so that you can get started in minutes. Veracode's services and support team can make sure that you
http://aka.ms/jea
https://www.owasp.org
https://azure.microsoft.com/en-us/services/security-center/
https://github.com/azsk/DevOpsKit-docs
Azure security and compliance tools and services
Lesson Overview
This lesson includes the following topics:
●● Azure Security Center
●● Azure Security Center usage scenarios
●● Azure Policy
●● Policies
●● Initiatives
●● Azure Key Vault
●● RBAC
●● Locks
●● Subscription governance
●● Azure Blueprints
●● Azure Advanced Threat Protection
Azure Security Center supports both Windows and Linux operating systems. It can also provide security
to features in both IaaS and PaaS scenarios.
Azure Security Center is available in two tiers:
●● Free. Available as part of your Azure subscription, this tier is limited to assessments and Azure
resources' recommendations only.
●● Standard. This tier provides a full suite of security-related services including continuous monitoring,
threat detection, JIT access control for ports, and more.
To access the full suite of Azure Security Center services you will need to upgrade to a Standard tier
subscription. You can access the 60-day free trial from within the Azure Security Center dashboard in the
Azure portal.
After the 60-day trial period is over, Azure Security Center is $15 per node per month. To upgrade a
subscription from the Free trial to the Standard tier, you must be assigned the role of Subscription Owner,
Subscription Contributor, or Security Admin.
You can integrate Security Center into your workflows and use it in many ways. Here are two examples.
1. Use Security Center for an incident response.
https://www.cisecurity.org/cis-benchmarks/
https://azure.microsoft.com/en-us/services/security-center/
Many organizations learn how to respond to security incidents only after suffering an attack. To
reduce costs and damage, it’s important to have an incident response plan in place before an attack
occurs. You can use Azure Security Center in different stages of an incident response.
You can use Security Center during the detect, assess, and diagnose stages. Here are examples of how
Security Center can be useful during the three initial incident response stages:
●● Detect. Review the first indication of an event investigation.
Example: Use the Security Center dashboard to review the initial verification that a high-priority
security alert was raised.
●● Assess. Perform the initial assessment to obtain more information about the suspicious activity.
Example: Obtain more information about the security alert.
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies.
Example: Follow the remediation steps described by Security Center in that particular security alert.
2. Use Security Center recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy, and then
implementing the recommendations provided by Azure Security Center.
A security policy defines the set of controls that are recommended for resources within that specified
subscription or resource group. In Security Center, you define policies according to your company's
security requirements.
Security Center analyzes the security state of your Azure resources. When Security Center identifies
potential security vulnerabilities, it creates recommendations based on the controls set in the security
policy. The recommendations guide you through the process of configuring the needed security
controls.
For example, if you have workloads that do not require the Azure SQL Database Transparent Data
Encryption (TDE) policy, turn off the policy at the subscription level and enable it only in the resource
groups where SQL Database TDE is required.
Note: You can read more about Azure Security Center at the Azure Security Center webpage. More
implementation and scenario details are also available in the Azure Security Center planning and
operations guide.
Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies
enforce different rules and effects over your resources, which ensures they stay compliant with your
corporate standards and service-level agreements (SLAs).
https://azure.microsoft.com/en-us/services/security-center/
https://docs.microsoft.com/en-us/azure/security-center/security-center-planning-and-operations-guide
Azure Policy provides enforcement by using policies and initiatives. It runs evaluations of your resources
and scans for those not compliant with the policies you have created. For example, you can have a policy
to allow only a certain stock keeping unit (SKU) size of VMs in your environment. After you implement
this policy, it will evaluate resources whenever they are created or updated. It will also
evaluate your existing resources and configurations and automatically remediate those that are deemed
non-compliant, thus ensuring the integrity of the state of the resources.
Azure Policy comes with a number of built-in policy and initiative definitions that you can use. The
definitions fall under categories such as Storage, Networking, Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps by applying any continuous integration and delivery
pipeline policies that apply to the pre-deployment and post-deployment of your applications.
An example of an Azure policy that you can integrate with your DevOps pipeline is the Check Gate task.
This provides security and compliance assessment with Azure policies on resources that belong to the
defined resource group and Azure subscription. This is available as a release pipeline Deploy task.
You can read more about these subjects at:
●● The Azure Policy check gate task
●● Azure Policy
Policies
The journey of creating and implementing a policy in Azure Policy begins with creating a policy definition.
Every policy definition has conditions under which it is enforced, and an accompanying effect that takes
place if the conditions are met.
The process of applying a policy to your resources consists of the following steps:
1. Create a policy definition.
2. Assign a definition to a scope of resources.
3. View policy evaluation results.
Policy definition
A policy definition expresses what to evaluate and what action to take. For example, you could prevent
VMs from deploying if they are exposed to a public IP address. You also could prevent a particular hard
disk from being used when deploying VMs to control costs. Policies are defined in JSON. Here's an
example script of a policy that limits where resources are deployed:
https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate?view=vsts
https://azure.microsoft.com/en-us/services/azure-policy/
{
"properties": {
"mode": "all",
"parameters": {
"allowedLocations": {
"type": "array",
"metadata": {
"description": "The list of locations that can be
specified when deploying resources",
"strongType": "location",
"displayName": "Allowed locations"
}
}
},
"displayName": "Allowed locations",
"description": "This policy enables you to restrict the locations
your organization can specify when deploying resources.",
"policyRule": {
"if": {
"not": {
"field": "location",
"in": "[parameters('allowedLocations')]"
}
},
"then": {
"effect": "deny"
}
}
}
}
To implement these policy definitions, whether custom or built in, you will need to assign them. A policy
assignment is a policy definition that has been assigned to take place within a specific scope. This scope
could range from a management group to a resource group. Policy assignments are inherited by all child
resources. This means that if a policy is applied to a resource group, it's applied to all the resources within
that resource group. However, you can exclude a subscope from the policy assignment.
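Assuming the policy rule shown earlier is saved locally, the definition-then-assignment flow might be sketched with the Azure CLI as follows. Resource and file names are hypothetical, and the az commands, which require an Azure subscription, are shown as comments; only the parameter values file is exercised locally:

```shell
# Parameter values for the allowedLocations parameter defined above
# (hypothetical example locations):
cat > location-params.json <<'EOF'
{
  "allowedLocations": {
    "value": ["westeurope", "northeurope"]
  }
}
EOF
python3 -m json.tool location-params.json > /dev/null

# One plausible shape of the CLI flow (requires Azure access):
# az policy definition create \
#   --name allowed-locations \
#   --rules allowed-locations.rules.json \
#   --display-name "Allowed locations"
#
# Assign it at resource-group scope; child resources inherit it:
# az policy assignment create \
#   --policy allowed-locations \
#   --params location-params.json \
#   --resource-group myResourceGroup
```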
Resources that are non-compliant to a policy can be put into a compliant state through Remediation.
Remediation is accomplished by instructing Azure Policy to run the effect of the assigned policy on your
existing resources.
Note: You can read more about Azure Policy on the Azure Policy webpage.
Initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for a larger goal. Even if you have a single policy, we recommend using
initiatives if you anticipate increasing the number of policies over time.
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope. This scope
could also range from a management group to a resource group. You assign initiatives in the same way
you assign policies.
Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping a set
of policies as one single item. For example, you could create an initiative named Enable Monitoring in
Azure Security Center, with a goal to monitor all the available security recommendations in your Azure
Security Center.
Under this initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL Database in Security Center. This policy definition is for monitoring
unencrypted SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition is for monitoring servers that
do not satisfy the configured baseline.
●● Monitor missing Endpoint Protection in Security Center. This policy definition is for monitoring
servers without an installed endpoint protection agent.
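One plausible shape for grouping such policies into an initiative with the Azure CLI is sketched below. The definition IDs and names are hypothetical placeholders, and the az commands, which require Azure access, are shown as comments; only the definitions file is validated locally:

```shell
# An initiative groups policy definitions as a single item.
# Hypothetical IDs; real IDs come from your subscription.
cat > monitoring-initiative.json <<'EOF'
[
  {
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/sql-encryption-monitoring"
  },
  {
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/os-vulnerabilities-monitoring"
  }
]
EOF
python3 -m json.tool monitoring-initiative.json > /dev/null

# Creating and assigning the initiative (requires Azure access):
# az policy set-definition create \
#   --name enable-monitoring \
#   --definitions monitoring-initiative.json \
#   --display-name "Enable Monitoring in Azure Security Center"
# az policy assignment create \
#   --policy-set-definition enable-monitoring \
#   --resource-group myResourceGroup
```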
Note: You can read more about policy definition and structure at Azure Policy definition structure.
https://azure.microsoft.com/en-us/services/azure-policy/
https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure
https://azure.microsoft.com/en-us/services/key-vault/
Examples of when you might use RBAC include when you want to:
●● Allow one user to manage VMs in a subscription, and another user to manage virtual networks.
●● Allow a database administrator (DBA) group to manage SQL Server databases in a subscription.
●● Allow a user to manage all resources in a resource group, such as VMs, websites, and subnets.
The following illustration is an example of the Access control (IAM) blade for a resource group, with
the Roles tab displaying some of the available built-in roles.
When using RBAC, segregate duties within your team and grant only the amount of access to users that
they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure
subscription or resources, allow only certain actions at a particular scope; that is, grant users the lowest
privilege level that they need to do their work.
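As a sketch of least privilege in practice, the snippet below writes out a hypothetical role assignment scoped to a single resource group rather than the whole subscription; the equivalent az role assignment create call requires Azure access and is shown as a comment:

```shell
# Hypothetical principal and scope, illustrating least privilege: the
# assignment targets one resource group, not the entire subscription.
cat > role-assignment.json <<'EOF'
{
  "assignee": "user@example.com",
  "role": "Virtual Machine Contributor",
  "scope": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup"
}
EOF
python3 -m json.tool role-assignment.json > /dev/null

# The equivalent Azure CLI call (requires Azure access):
# az role assignment create \
#   --assignee user@example.com \
#   --role "Virtual Machine Contributor" \
#   --scope "/subscriptions/<id>/resourceGroups/myResourceGroup"
```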
Note: For more information about RBAC, visit the RBAC built-in roles and overview documentation.
Locks
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage
these locks from within the Azure portal. To view, add, or delete locks, go to the Locks section of any
resource's settings blade.
https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
You may need to lock a subscription, resource group, or resource to prevent other users in your
organization from accidentally deleting or modifying critical resources. You can set the lock level to
CanNotDelete or ReadOnly:
●● CanNotDelete means authorized users can still read and modify a resource, but they can't delete the
resource.
●● ReadOnly means authorized users can read a resource, but they can't delete or update it. Applying
this lock is similar to restricting all authorized users to the permissions granted by the Reader role.
In the Azure portal, the locks are called Delete and Read-only, respectively.
Note: You can read more about Locks in the Azure Resource Manager documentation.
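A small sketch of the two lock levels and how one might be created with the Azure CLI; the resource group name is hypothetical, and the az lock create call, which requires Azure access, is shown as a comment:

```shell
# The two lock levels, mapped to their Azure portal names:
cat > lock-levels.txt <<'EOF'
CanNotDelete = portal "Delete" lock
ReadOnly     = portal "Read-only" lock
EOF
cat lock-levels.txt

# Creating a delete lock on a hypothetical resource group
# (requires Azure access):
# az lock create \
#   --name prevent-deletion \
#   --resource-group myResourceGroup \
#   --lock-type CanNotDelete
```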
Subscription governance
When creating and managing subscriptions, there are three main aspects to consider: billing, access
control, and subscription limits.
●● Billing. Billing reports can be generated by subscriptions. If, for example, you have multiple internal
departments and need to perform a chargeback, you can create subscriptions by department or
project.
●● Access Control. A subscription is a deployment boundary for Azure resources. Every subscription is
associated with an Azure Active Directory (Azure AD) tenant that provides administrators with the
ability to set up RBAC. When designing a subscription model, be sure to consider the deployment
boundary factor. Some customers have separate subscriptions for development and production, each
one (from a resource perspective) being completely isolated from the other, and managed using
RBAC.
●● Subscription Limits. Subscriptions are bound to some hard limitations. For example, the maximum
number of Azure ExpressRoute circuits per subscription is 10. You should take those limits into
consideration during the design phase. If there is a need to go over those limits in particular scenarios, then you might need additional subscriptions. If you hit a hard limit, there is no flexibility.
Also available to assist with managing subscriptions are management groups. Management groups
manage access, policies, and compliance across multiple Azure subscriptions. They allow you to order
your Azure resources hierarchically into collections, which provides a further level of classification above
the level of subscriptions.
In the graphic below, we can see how Azure access is divided up across different business functions, such as HR, marketing, and IT, and also per region. We could sub-divide this further and include subscriptions
for Dev and QA, as well as specific teams working on our pipeline such as the security team. We could
then track our costs and resource usage at a much more granular level, as well as adding additional security layers and segmenting our workloads. Tightly restricting access to the production subscriptions would
further enhance our security segmentation.
You can manage your Azure subscriptions more effectively by using Azure Policy and Azure RBAC. These
provide distinct governance conditions that you can apply to each management group. The resources
and subscriptions you assign to a management group automatically inherit the conditions that you apply
to that management group.
Note: For more information about management groups and Azure, see https://docs.microsoft.com/en-us/azure/governance/management-groups/.
Azure Blueprints
Azure Blueprints enables cloud architects to define a repeatable set of Azure resources that implement
and adhere to an organization's standards, patterns, and requirements. Azure Blueprints helps development teams rapidly build and deploy new environments with the knowledge that they're building within
organizational compliance, and with a set of built-in components that speed up development and
delivery.
Azure Blueprints is a declarative way to orchestrate deployment for various resource templates and other
artifacts, such as:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
The process of implementing Azure Blueprints consists of the following high-level steps:
1. Create an Azure Blueprints blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
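To make step 1 more concrete, a blueprint definition is itself described declaratively. The fragment below is only an illustrative sketch of the general shape (a target scope, a description, and parameters that assignments can supply); the property values shown are assumptions, not a definitive schema.

```json
{
  "properties": {
    "targetScope": "subscription",
    "description": "Example baseline blueprint (illustrative).",
    "parameters": {
      "allowedLocation": {
        "type": "string",
        "metadata": {
          "displayName": "Location allowed for deployed resources"
        }
      }
    }
  }
}
```

Artifacts such as role assignments, policy assignments, and Resource Manager templates are then attached to this definition before it is assigned to a subscription.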
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved. This connection supports improved deployment
tracking and auditing.
The blueprints in Azure Blueprints are different from Azure Resource Manager templates. When Azure
Resource Manager templates deploy resources, they have no active relationship with the deployed
resources (they exist in a local environment or in source control). By contrast, with Azure Blueprints, each
deployment is tied to an Azure Blueprints package. This means that the relationship with resources will be
maintained, even after deployment. In this way, maintaining relationships improves auditing and tracking
capabilities.
Note: For more information about Azure Blueprints, see https://azure.microsoft.com/en-us/services/blueprints/.
Azure Advanced Threat Protection (ATP)
You can access the Azure ATP portal at https://portal.atp.azure.com.
Azure ATP is available as part of the Enterprise Mobility + Security E5 suite, and as a standalone license. You can acquire a license directly from the Enterprise Mobility + Security pricing page at https://www.microsoft.com/en-ie/cloud-platform/enterprise-mobility-security-pricing, or through the Cloud Solution Provider (CSP) licensing model. It is not available to purchase via the Azure portal.
Note: For more information about Azure ATP, review https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection/.
Labs Overview
1. If you already have a Microsoft account that has not already been used to sign up for a free Azure trial subscription, you're ready to get started. If not, don't worry, just create a new Microsoft account at https://aka.ms/edx-devops200.4x-msa.
2. After you've created a Microsoft account, create your free Azure account at https://aka.ms/edx-devops200.4x-az2. You'll need to sign in with your Microsoft account if you're not already signed in. Then you'll need to:
●● Enter your cellphone number and have Microsoft send you a text message to verify your identity.
●● Enter the code you have been sent to verify it.
●● Provide valid payment details. This is required for verification purposes only; your credit card won't be charged for any services you use during the trial period, and the account is automatically deactivated at the end of the trial period unless you explicitly decide to keep it active. For more information, see the free account FAQ at https://azure.microsoft.com/en-us/free/free-account-faq/.
Like many other cloud infrastructure platforms today, Azure is continuously developing updates to its services and components. If you've had your own subscriptions for any length of time, you are already aware that changes to services happen much more rapidly than with more traditional application deployment models.
Every effort will be made to update course content where there are major changes to product functionality. However, there will be occasions where course content does not exactly match the latest version of
the product. In most cases, you should still be able to understand the tasks and complete the course. The
general guidance from the Azure documentation teams is to check the documentation frequently to see
what upcoming notifications have been posted or where documentation has been updated to reflect the
latest changes.
We encourage you to consult the Azure Updates page at https://azure.microsoft.com/en-us/updates/ as a starting point for the latest information about updates. From there, you can avail yourself of blogs and other resources that are provided in order to help you stay current in a cloud-enabled world.
AZ-400T05-M05-Lab Tasks
Steps for the labs for this module are available at the site below. You should click on the link and follow the steps outlined there for each lab task.
●● http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaC-SecurityandComplianceinpipeline.html
Module Review Questions
What component in Azure DevOps can you use to store, organize, and share access to packages, and integrate those packages with your continuous integration and continuous delivery pipeline?
Test Plans
Azure Artifacts
Boards
Pipelines
Which of the following package types are available to use with Azure Artifacts?
(choose three)
NuGet
npm
PowerShell
Maven
Which description from the list below best describes the term Software Composition Analysis?
Assessment of production hosting infrastructure just before deployment
Analyze build software to identify load capacity
Analyzing open source software (OSS) to identify potential security vulnerabilities and provide validation that the software meets defined criteria for use in your pipeline
Analyzing open source software after it has been deployed to production to identify security vulnerabilities
From where can extensions be sourced to be integrated into Azure DevOps CI/CD pipelines and help provide software composition analysis?
Azure DevOps Marketplace
www.microsoft.com
Azure Security Center
Which products, from the below list, are available as extensions in Azure DevOps Marketplace, and can
provide either OSS or source code scanning as part of an Azure DevOps pipeline?
Which Azure service from the below list is a monitoring service that can provide threat protection and
security recommendations across all of your services both in Azure and on-premises?
Azure Policy
Azure Security Center
Azure Key Vault
Role-based access control
Which Azure service should you use from the below list to monitor all unencrypted SQL databases in your
organization?
Azure Policy
Azure Security Center
Azure Key Vault
Azure Machine Learning
Which facility from the below list allows you to prevent accidental deletion of resources in Azure?
Key Vault
Azure virtual machines
Azure Blueprints
Locks
Cost management
■■ DevOps
Microservice Architecture
■■ Security
Hackathons
Explanation
DevOps and Security are the correct answers.
All other answers are incorrect.
Rugged DevOps brings together the notions of DevOps and Security. DevOps is about working faster.
Security is about emphasizing thoroughness, which is typically done at the end of the cycle, resulting in
potentially generating unplanned work right at the end of the pipeline. Rugged DevOps is a set of practices
designed to integrate DevOps and security, and to meet the goals of both more effectively.
perimeter protection
■■ Securing the pipeline
Explanation
Securing the pipeline is the correct answer.
All other answers, while covering some elements of IT security and being important in their own right, do not cover what is meant by security in Rugged DevOps.
With Rugged DevOps, security is more about securing the pipeline, determining where you can add security to the elements that plug into your build and release pipeline. For example, it's about how and where you can add security to your automation practices, production environments, and other pipeline elements while attempting to maintain the speed of DevOps.
Rugged DevOps includes bigger questions such as:
Is my pipeline consuming third-party components, and if so, are they secure?
Are there known vulnerabilities within any of the third-party software we use?
How quickly can I detect vulnerabilities (time to detect)?
How quickly can I remediate identified vulnerabilities (time to remediate)?
What component in Azure DevOps can you use to store, organize, and share access to packages, and integrate those packages with your continuous integration and continuous delivery pipeline?
Test Plans
■■ Azure Artifacts
Boards
Pipelines
Explanation
Azure Artifacts is the correct answer. Azure Artifacts is an integral part
of the component workflow, which you can use to organize and share access to
your packages. It allows you to:
Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in Git.
Protect your packages. Keep every public source package you use, including packages from npmjs and
nuget.org, safe in your feed where only you can delete it, and where it’s backed by the enterprise-grade
Azure SLA.
Integrate seamless package handling into your CI/CD pipeline. Easily access all your artifacts in builds and
releases. Artifacts integrate natively with the Azure Pipelines CI/CD tool.
All other answers are incorrect.
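As an example of the package handling described above, an npm client can be pointed at an Azure Artifacts feed through its .npmrc configuration. The organization and feed names below are placeholders, not real values; the registry URL pattern reflects the form Azure Artifacts feeds use.

```ini
; .npmrc — resolve packages through an Azure Artifacts feed
registry=https://pkgs.dev.azure.com/{organization}/_packaging/{feed-name}/npm/registry/
always-auth=true
```

With this in place, npm install resolves packages through the feed, which can also upstream public packages from npmjs.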
Which of the following package types are available to use with Azure Artifacts?
(choose three)
■■ NuGet
■■ npm
PowerShell
■■ Maven
Explanation
NuGet, npm, and Maven are the correct answers. PowerShell is not a supported package type and is incorrect.
Azure Artifacts allows the sharing of code easily by storing Apache Maven, npm, and NuGet packages
together. You can also store packages using Universal Packages, eliminating the need to store binaries in
Git.
Which description from the list below best describes the term Software Composition Analysis?
Assessment of production hosting infrastructure just before deployment
Analyze build software to identify load capacity
■■ Analyzing open source software (OSS) to identify potential security vulnerabilities and provide validation that the software meets defined criteria for use in your pipeline
Analyzing open source software after it has been deployed to production to identify security vulnerabilities
Explanation
Analyzing open source software (OSS) to identify potential security vulnerabilities and provide validation that the software meets defined criteria for use in your pipeline is the correct answer.
All other answers are incorrect.
When consuming an OSS component, whether you're creating or consuming dependencies, you'll typically
want to follow these high-level steps:
From where can extensions be sourced to be integrated into Azure DevOps CI/CD pipelines and help provide software composition analysis?
Explanation
Azure DevOps Marketplace is the correct answer. All other answers are incorrect.
Azure DevOps Marketplace is an important site for addressing Rugged DevOps issues. From here you can
integrate specialist security products into your Azure DevOps pipeline. Having a full suite of extensions that
allow seamless integration into Azure DevOps pipelines is invaluable.
Which products, from the below list, are available as extensions in Azure DevOps Marketplace, and can provide either OSS or source code scanning as part of an Azure DevOps pipeline?
(choose all that apply)
■■ Whitesource
■■ CheckMarx
Explanation
All answers are correct.
All of the listed products are available as extensions in Azure DevOps Marketplace, and can provide either OSS or static source code scanning as part of the Azure DevOps pipeline.
Which Azure service from the below list is a monitoring service that can provide threat protection and
security recommendations across all of your services both in Azure and on-premises?
Azure Policy
■■ Azure Security Center
Azure Key Vault
Role-based access control
Explanation
Azure Security Center is the correct answer. All other answers are incorrect.
Azure Security Center is a monitoring service that provides threat protection across all of your services both
in Azure, and on-premises. Security Center can:
None of the other services provides a monitoring service that can provide threat protection and security recommendations across all of your services both in Azure and on-premises.
Which Azure service should you use from the below list to monitor all unencrypted SQL databases in your
organization?
■■ Azure Policy
Azure Security Center
Azure Key Vault
Azure Machine Learning
Explanation
Azure Policy is the correct answer. All other answers are incorrect.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, which ensures they stay compliant with your corporate standards and service-level agreements (SLAs). A policy definition expresses what to evaluate and what action to
take. For example, you could prevent VMs from deploying if they are exposed to a public IP address. You
also could prevent a particular hard disk from being used when deploying VMs to control costs.
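A policy definition's rule is expressed as JSON with an if condition and a then effect. The rule below is a small sketch of this structure, denying resources deployed outside two example locations; the location values are assumptions chosen for illustration, while the if/then shape follows the policy definition structure described in the Azure Policy documentation.

```json
{
  "if": {
    "not": {
      "field": "location",
      "in": [ "eastus", "westus" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Swapping the effect to audit would flag non-compliant resources without blocking deployment, which is often a safer first step when rolling a new policy out.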
Initiative definitions simplify the process of managing and assigning policy definitions by grouping a set of
policies as one single item. For example, you could create an initiative named Enable Monitoring in Azure
Security Center, with a goal to monitor all the available security recommendations in your Azure Security
Center. Under this initiative, you would have the following policy definitions:
Which facility from the below list allows you to prevent accidental deletion of resources in Azure?
Key Vault
Azure virtual machines
Azure Blueprints
■■ Locks
Explanation
Locks is the correct answer. All other answers are incorrect.
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage these
locks from within the Azure portal. To view, add, or delete locks, go to the SETTINGS section of any resource's settings blade.
You may need to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly.