
Microsoft
Official
Course

AZ-400T05
Implementing Application Infrastructure
Module 3 Azure Compute services

Labs Overview

To complete these exercises, you will require the following items:

●● a Microsoft account
●● an Azure subscription

1. If you already have a Microsoft account that you have not already used to sign up for a free Azure trial subscription, you're ready to get started. If not, don't worry, you can create a new Microsoft account.
2. After you've created a Microsoft account, create your free Azure account. (If you're not already signed into your Microsoft account, you'll need to sign in now.)

●● Enter your cellphone number and have Microsoft send you a text message to verify your identity.
●● Enter the code you have been sent to verify your number.
●● Provide valid payment details. This is required for verification purposes only; your credit card won't be charged for any services you use during the trial period, and the account is automatically deactivated at the end of the trial unless you explicitly decide to keep it active. For more information, see the Azure free account FAQ page.

Like many other cloud infrastructure platforms today, Azure continuously develops updates to its services and components. If you've had your own subscription for any length of time, you're already aware that changes to services happen much more rapidly than with more traditional application deployment models.

Every effort will be made to update course content when there are major changes to product functionality. However, there will be occasions where course content does not exactly match the latest version of the product. In most cases, you should still be able to understand and complete the tasks. The general guidance from the Azure documentation teams is to check the documentation frequently to see what upcoming notifications have been posted or where documentation has been updated to reflect the latest changes.
We encourage you to consult the Azure updates page for the latest information. From there, you can research blogs and other provided resources to help you stay current in a cloud-enabled world.

https://aka.ms/edx-devops200.4x-msa

https://aka.ms/edx-devops200.4x-az2
https://azure.microsoft.com/en-us/free/free-account-faq/
https://azure.microsoft.com/en-us/updates/

AZ-400T05-M03-Lab Tasks

Module 3 - Azure Compute services


This course includes labs in which you can try out the techniques demonstrated in the course.
Steps for the labs are available at the sites below. Click on the links for the individual lab tasks for this module, and follow the steps outlined there for each lab task.

●● http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaCDeployApptoAppServices.html
●● http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaCDeployApptoAKS.html

Azure Compute services review questions



Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of traffic?
†† Azure Service Fabric
†† Virtual Machine Scale Sets
†† Azure Kubernetes Service

†† Virtual Network

Availability sets are made up of which of the following?


(choose two)

†† Update Domains
†† Azure AD Domain Services
†† Fault Domains
†† Event Domains

Complete the following sentence: Azure App Service is an Azure Platform-as-a-Service offering that is used for...?
†† processing events with serverless code.
†† detecting, triaging, and diagnosing issues in your web apps and services.

†† building, testing, releasing, and monitoring your apps from within a single software application.
†† hosting web applications, REST APIs, and mobile back ends.

Which of the following are features of Web App for Containers?


(choose all that apply)

†† Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
†† Incrementally deploys apps into production with deployment slots and slot swaps.

†† Scales out automatically with auto-scale.


†† Uses the App Service Log Streaming feature to allow you to see logs from your application.

†† Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Module Review Questions

Which of the following statements is best practice for Azure Functions?


†† Azure Functions should be stateful.
†† Azure Functions should be stateless.

Which of the following features are supported by Azure Service Fabric?


(choose all that apply)
†† Reliable Services
†† Reliable Actor patterns
†† Guest Executables
†† Container processes

Which of the following describe primary uses for Placement Constraints?


(choose all that apply)
†† Isolate workloads from each other
†† Control which nodes in a cluster that a service can run on
†† ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
†† Describe resources that nodes have, and that services consume, when they are run on a node.

Which of the following are network models for deploying a cluster in Azure Kubernetes Service (AKS)?
(choose two)
†† Basic Networking
†† Native Model
†† Advanced Networking
†† Resource Model

True or false: containers are a natural fit for an event-driven architecture?


†† True
†† False

Which of the following cloud service models provides the most control, flexibility, and portability?
†† Infrastructure-as-a-Service (IaaS)
†† Functions-as-a-Service (FaaS)
†† Platform-as-a-Service (PaaS)

Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of
traffic?
†† Azure Service Fabric
■■ Virtual Machine Scale Sets
†† Azure Kubernetes Service
†† Virtual Network
Explanation
Virtual Machine Scale Sets is the correct answer.
All other answers are incorrect.
Azure Service Fabric is for developing microservices and orchestrating containers on Windows or Linux.
Azure Kubernetes Service (AKS) simplifies the deployment, management, and operations of Kubernetes.
Virtual Network is for setting up and connecting virtual private networks.
With Azure VMs, scale is provided for by Virtual Machine Scale Sets (VMSS). Azure VMSS let you create and
manage groups of identical, load balanced VMs. The number of VM instances can increase or decrease
automatically, in response to demand or a defined schedule. Azure VMSS provide high availability to your
applications, and allow you to centrally manage, configure, and update large numbers of VMs. With Azure
VMSS, you can build large-scale services for areas such as compute, big data, and container workloads.

Availability sets are made up of which of the following?


(choose two)
■■ Update Domains
†† Azure AD Domain Services
■■ Fault Domains
†† Event Domains
Explanation
Update Domains and Fault Domains are the correct answers.
Azure AD Domain Services and Event Domains are incorrect answers.
Azure AD Domain Service provides managed domain services to a Windows Server Active Directory in
Azure. An event domain is a tool for managing and publishing information.
Update Domains are a logical section of the datacenter, implemented by software and logic. When a
maintenance event occurs (such as a performance update or critical security patch applied to the host), the
update is sequenced through Update Domains. Sequencing updates by using Update Domains ensures that
the entire datacenter does not fail during platform updates and patching.
Fault Domains provide for the physical separation of your workload across different hardware in the datacenter. This includes power, cooling, and network hardware that supports the physical servers located in server racks. If the hardware that supports a server rack becomes unavailable, only that specific rack of servers would be affected by the outage.

Complete the following sentence: Azure App Service is an Azure Platform-as-a-Service offering that is used for...?
†† processing events with serverless code.
†† detecting, triaging, and diagnosing issues in your web apps and services.
†† building, testing, releasing, and monitoring your apps from within a single software application.

■■ hosting web applications, REST APIs, and mobile back ends.


Explanation
Hosting web applications, REST APIs, and mobile back ends, is the correct answer.
The other answers are incorrect because:
Processing events with serverless code is performed by Azure Functions.
Detecting, triaging, and diagnosing issues in your web apps and services is performed by Application Insights.
Building, testing, releasing, and monitoring your apps from within a single software application is performed
by Visual Studio App Center.
Azure App Service is a Platform-as-a-Service offering on Azure, for hosting web applications, REST APIs, and mobile back ends. With Azure App Service you can create powerful cloud apps quickly within a fully managed platform. You can use Azure App Service to build, deploy, and scale enterprise-grade web, mobile, and API apps to run on any platform. Azure App Service ensures your applications meet rigorous performance, scalability, security, and compliance requirements, and benefit from using a fully managed platform for performing infrastructure maintenance.

Which of the following are features of Web App for Containers?



(choose all that apply)


■■ Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
■■ Incrementally deploys apps into production with deployment slots and slot swaps.
■■ Scales out automatically with auto-scale.
■■ Uses the App Service Log Streaming feature to allow you to see logs from your application.

■■ Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Explanation
All of the answers are correct.
Web App for Containers from the Azure App Service allows customers to use their own containers, and deploy them to Azure App Service as a web app. Similar to the Azure Web App solution, Web App for Containers eliminates time-consuming infrastructure management tasks during container deployment, updating, and scaling to help developers focus on coding and getting their apps to their end users faster.
Furthermore, Web App for Containers provides integrated CI/CD capabilities with Docker Hub, Azure Container Registry, and VSTS, as well as built-in staging, rollback, testing-in-production, monitoring, and performance testing capabilities to boost developer productivity.
For Operations, Web App for Containers also provides rich configuration features so developers can easily add custom domains, integrate with AAD authentication, add SSL certificates, and more, all of which are crucial to web app development and management. Web App for Containers provides an ideal environment to run web apps that do not require extensive infrastructure control.

Which of the following statements is best practice for Azure Functions?


†† Azure Functions should be stateful.
■■ Azure Functions should be stateless.
Explanation
Azure Functions should be stateless is the correct answer.
Azure Functions should be stateful is an incorrect answer.
Azure Functions are an implementation of the Functions-as-a-Service programming model on Azure, with
additional capabilities. It is best practice to ensure that your functions are as stateless as possible. Stateless
functions behave as if they have been restarted, every time they respond to an event. You should associate
any required state information with your data instead. For example, an order being processed would likely
have an associated state member. A function could process an order based on that state, update the data as
required, while the function itself remains stateless. If you require stateful functions, you can use the Durable
Functions Extension for Azure Functions or output persistent data to an Azure Storage service.
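The order-processing idea above can be sketched in plain code. This is a general illustration, not Azure Functions SDK code; the state values and the `process_order` helper are hypothetical. The handler carries no state of its own: it reads the order's state member, advances it, and returns the updated record, so any instance can pick up the work after a restart.

```ruby
# Sketch of a stateless handler: all state lives on the order record itself,
# not in the function, so the function can be restarted between any two events.
def process_order(order)
  case order[:state]
  when 'received' then order.merge(state: 'charged')  # advance the order's own state
  when 'charged'  then order.merge(state: 'shipped')
  else order # already shipped, or unknown state: leave the record unchanged
  end
end
```

Each invocation reads the state from the data, advances it, and writes it back; nothing is held inside the function between events.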

Which of the following features are supported by Azure Service Fabric?


(choose all that apply)
■■ Reliable Services
■■ Reliable Actor patterns
■■ Guest Executables
■■ Container processes
Explanation
All of the answers are correct.
Reliable Services is a framework for creating services that use specific features provided by Azure Service
Fabric. The two distinct types of Reliable Services you can create are stateless services and stateful services.
Reliable Actors is a framework built on top of Reliable Services which implements the Virtual Actors design
pattern. An Actor encapsulates a small piece of a state or behavior. The state of an Actor can be volatile, or
it can be kept persistent in a distributed store. This store can be memory-based or on a disk.
Guest Executables are existing applications that you package and run as Service Fabric services (stateless).
This makes the applications highly available, as Service Fabric keeps the instances of your applications
running. Applications can be upgraded with no downtime, and Service Fabric can automatically roll back
deployments if needed.
Containers can be run in a way that is similar to running guest executables. Furthermore, with containers,
Service Fabric can restrict resource consumption per container (by CPU processes or memory usage, for
example). Limiting resource consumption per service allows you to achieve higher densities on your cluster.

Which of the following describe primary uses for Placement Constraints?



(choose all that apply)


■■ Isolate workloads from each other
■■ Control which nodes in a cluster that a service can run on
■■ ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
†† Describe resources that nodes have, and that services consume, when they are run on a node.
Explanation
The correct answers are: Isolate workloads from each other, control which nodes in a cluster that a service
can run on, and ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
Describe resources that nodes have, and that services consume, when they are run on a node, is an incorrect answer. Metrics are used to describe resources that nodes have, and services consume, when they are run on a node.
Placement Constraints can control which nodes in a cluster that a service can run on. You can define any
set of properties by node type, and then set constraints for them. Placement Constraints are primarily used
to: Isolate workloads from each other; 'Lift and shift' an existing N-tier application into Azure Service Fabric;
Run services on specific server configurations.
Placement Constraints can restrict Service Fabric's ability to balance overall cluster resource consumption.
Make sure that your Placement Constraints are not too restrictive. Otherwise, if Service Fabric cannot
comply with a Placement Constraint, your service will not run.

Which of the following are network models for deploying a cluster in Azure Kubernetes Service (AKS)?
(choose two)
■■ Basic Networking
†† Native Model
■■ Advanced Networking
†† Resource Model
Explanation
Basic Networking and Advanced Networking are correct answers.
Native Model and Resource Model are incorrect answers because these are two deployment models
supported by Azure Service Fabric.

In AKS, you can deploy a cluster to use either Basic Networking or Advanced Networking. With Basic Networking, the network resources are created and configured as the AKS cluster is deployed. Basic Networking is suitable for small development or test workloads, as you don't have to create the virtual network and subnets separately from the AKS cluster. Simple websites with low traffic, or lift-and-shift workloads moved into containers, can also benefit from the simplicity of AKS clusters deployed with Basic Networking.
With Advanced Networking, the AKS cluster is connected to existing virtual network resources and configurations. Advanced Networking allows for the separation of control and management of resources. When you use Advanced Networking, the virtual network resource is in a separate resource group to the AKS cluster. For most production deployments, you should plan for and use Advanced Networking.

True or false: containers are a natural fit for an event-driven architecture?


†† True
■■ False
Explanation
False is the correct answer.
True is an incorrect answer.
Architecture styles don't require the use of particular technologies, but some technologies are well-suited for
certain architectures. For example, containers are a natural fit for microservices, and an event-driven
architecture is generally best suited to IoT and real-time systems.
An N-tier architecture model is a natural fit for migrating existing applications that already use a layered architecture.
A Web-queue-worker architecture model is suitable for relatively simple domains with some resource-intensive tasks.
The CQRS architecture model makes the most sense when it's applied to a subsystem of a larger architecture.
A Big data architecture model divides a very large dataset into chunks, performing parallel processing across the entire set, for analysis and reporting.
Finally, the Big compute architecture model, also called high-performance computing (HPC), makes parallel
computations across a large number (thousands) of cores.

Which of the following cloud service models provides the most control, flexibility, and portability?
■■ Infrastructure-as-a-Service (IaaS)
†† Functions-as-a-Service (FaaS)
†† Platform-as-a-Service (PaaS)
Explanation
Infrastructure-as-a-Service (IaaS) is the correct answer.
Functions-as-a-Service (FaaS) and Platform-as-a-Service (PaaS) are incorrect answers.
Of the three cloud service models mentioned, IaaS provides the most control, flexibility, and portability.
FaaS provides simplicity, elastic scale, and potential cost savings, because you pay only for the time your
code is running. PaaS falls somewhere between the two.
Module 4 Third Party and Open Source Tool integration with Azure

Lesson Overview
This lesson includes the following topics:
●● What is Chef
●● Chef Automate
●● Chef Cookbooks
●● Chef Knife command

What is Chef
Chef is an infrastructure automation tool that you use for deploying, configuring, managing, and ensuring
compliance of applications and infrastructure. It provides for a consistent deployment and management
experience.
Chef helps you to manage your infrastructure in the cloud, on-premises, or in a hybrid environment by using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine, cloud instance, or network device that is under management by Chef.
The following diagram is of the high-level Chef architecture:

Chef has three main architectural components:

●● Chef Server. This is the management point. There are two options for the Chef Server: a hosted solution and an on-premises solution.
●● Chef Client (node). This is a Chef agent that resides on the servers you are managing.
●● Chef Workstation. This is the admin workstation where you create policies and execute management commands. You run the knife command from the Chef Workstation to manage your infrastructure.

There are also the concepts of Chef Cookbooks and Recipes. These are essentially the policies that you define and apply to your servers.

Chef Automate
You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.

Chef Automate is a Chef product that allows you to package and test your applications, and provision and update your infrastructure. Using Chef, you can manage all of it with compliance and security checks, and dashboards that give you visibility into your entire stack.


Chef with Azure

The Chef Automate image is available on the Azure Chef Server and has all the functionality of the legacy
Chef Compliance server. You can build, deploy, and manage your applications and infrastructure on
Azure. Chef Automate is available from the Azure Marketplace, and you can try it out with a free 30-day
license. You can deploy it in Azure straight away.

Chef Automate integrates with the open-source products Chef, InSpec, and Habitat, and their associated
tools, including chef-client and ChefDK. The following image is an overview of the structure of Chef
Automate, and how it functions.

●● Habitat. Habitat is an open-source project that offers an entirely new approach to application management. It makes the application and its automation the unit of deployment by creating platform-independent build artifacts that can run on traditional servers and virtual machines (VMs). They also can be exported into your preferred container platform, enabling you to deploy your applications in any environment. When applications are wrapped in a lightweight “habitat” (the runtime environment), whether the habitat is a container, a bare metal machine, or platform as a service (PaaS) is no longer the focus and does not constrain the application.
For more information about Habitat, see the Chef Habitat overview page.
●● InSpec. InSpec is a free and open-source framework for testing and auditing your applications and
infrastructure. InSpec works by comparing the actual state of your system with the desired state that
you express in easy-to-read and easy-to-write InSpec code. InSpec detects violations and displays
findings in the form of a report, but puts you in control of remediation.
You can use InSpec to validate the state of your VMs running in Azure. You can also use InSpec to scan
and validate the state of resources and resource groups inside a subscription.
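For a sense of what that desired-state code looks like, here is a minimal sketch of an InSpec control; the control name and port number are illustrative, not from this course:

```ruby
# Hypothetical InSpec control: verify a web server is listening on port 80.
control 'webserver-listening' do
  impact 1.0
  title 'Web server must be listening on port 80'
  describe port(80) do
    it { should be_listening }
  end
end
```

Running such a control reports a pass or a violation; remediation is left to you.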

https://docs.microsoft.com/en-us/azure/chef/chef-habitat-overview

More information about InSpec is available on the Chef InSpec overview page.

Chef Cookbooks
Chef uses a cookbook to define a set of commands that you execute on your managed client. A cookbook is a set of tasks that you use to configure an application or feature. It defines a scenario and everything required to support that scenario. Within a cookbook, there are a series of recipes, which define a set of actions to perform. Cookbooks and recipes are written in the Ruby language.
After you create a cookbook, you can then create a Role. A Role defines a baseline set of cookbooks and attributes that you can apply to multiple servers. To create a cookbook, you use the chef generate cookbook command.
Before creating a cookbook, you first configure your Chef Workstation by setting up the Chef Development Kit on your local workstation. You'll use the Chef Workstation to connect to and manage your Chef server.
Note: You can download and install the Chef Development Kit from the Chef downloads site.
Choose the Chef Development Kit that is appropriate to your operating system and version. For example:
●● macOS
●● Debian
●● Red Hat Enterprise Linux
●● SUSE Linux Enterprise Server
●● Ubuntu
●● Windows
1. Installing the Chef Development Kit sets up the Chef workstation automatically. After installation completes, run the following example command, which generates a cookbook named webserver for a policy that automatically deploys IIS:

chef generate cookbook webserver

2. This command generates a set of files under the webserver directory. Next, you need to define the set of commands that you want the Chef client to execute on your managed virtual machine. The commands are stored in the cookbook's recipes/default.rb file.
3. For this example, a set of commands will be defined that installs and starts Microsoft Internet Information Services (IIS), and copies a template file to the wwwroot folder. Modify the default.rb file by adding the following lines:

powershell_script 'Install IIS' do
  action :run
  code 'add-windowsfeature Web-Server'
end

service 'w3svc' do
  action [ :enable, :start ]
end

template 'c:\inetpub\wwwroot\Default.htm' do
  source 'Default.htm.erb'
  rights :read, 'Everyone'
end

4. Save the file after you are done.
5. To generate the template, run the following command:

chef generate template webserver Default.htm

6. Now navigate to the generated Default.htm.erb template file. Edit the file by adding some simple HTML code, and then save the file.
7. Run the following command to upload the cookbook to the Chef server:

knife cookbook upload webserver --include-dependencies

We have now created our cookbook and it's ready to use.

https://docs.microsoft.com/en-us/azure/chef/chef-inspec-overview
https://downloads.chef.io/chefdk


The next steps (which we will not be covering at this time), would be to:
1. Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple
servers.
2. Create a node to deploy the configuration to the machine you want to configure.
3. Bootstrap the machine using Chef to add the role to the node that deployed the configuration to the
machine.

Chef Knife command

Knife is a command that's available from the command line. It's made available as part of the Chef Development Kit installation. You can use the knife command to complete a wide variety of tasks, such as:
●● Generate a cookbook template. You do this by running the following command:
chef generate cookbook < cookbook name >

●● Upload your cookbooks and recipes to the Chef Automate server using the following command:

knife cookbook upload < cookbook name> --include-dependencies

●● Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple
servers. Use the following command to create this role:
knife role create < role name >

●● Bootstrap a node or client and assign a role using the following command:
knife bootstrap < FQDN-for-App-VM > --ssh-user <app-admin-username>
--ssh-password <app-vm-admin-password> --node-name < node name > --run-
list role[ < role you defined > ] --sudo --verbose

You can also bootstrap Chef VM extensions for the Windows and Linux operating systems, in addition to provisioning them in Azure using the knife command. For more information, look up the ‘cloud-api’ bootstrap option in the Knife plugin documentation at the knife-azure GitHub repository.
Note: You can also install the Chef extensions to an Azure VM using Windows PowerShell. By installing the Chef Management Console, you can manage your Chef server configuration and node deployments via a browser window.

https://github.com/chef/knife-azure
Puppet with Azure

Lesson Overview
This lesson includes the following topics:
●● What is Puppet
●● Deploying Puppet in Azure
●● Manifest files

What is Puppet
Puppet is a deployment and configuration management toolset that provides the enterprise tools you need to automate the entire lifecycle of your Azure infrastructure. It also provides consistency and transparency into infrastructure changes.
Puppet provides a series of open-source configuration management tools and projects. It also provides
Puppet Enterprise, which is a configuration management platform that allows you to maintain state in
both your infrastructure and application deployments.

Puppet operates using a client-server model, and consists of the following core components:
●● Puppet Master. The Puppet Master is responsible for compiling code to create agent catalogs. It's also
where Secure Sockets Layer (SSL) certificates are verified and signed. Puppet Enterprise infrastructure
components are installed on a single node, the master. The master always contains a compile master
and a Puppet Server. As your installation grows, you can add additional compile masters to distribute
the catalog compilation workload.
●● Puppet Agent. Puppet Agent is the machine (or machines) managed by the Puppet Master. An agent installed on those managed machines allows them to be managed by the Puppet Master.
●● Console Services. Console Services are the web-based user interface for managing your systems.
●● Facts. Facts are metadata related to state. Puppet will query a node and determine a series of facts,
which it then uses to determine state.
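Facts can then drive configuration decisions inside a manifest. The following sketch is illustrative (the package names are examples, not from this course): it uses the built-in os.family fact to choose the right Apache package for the node's platform.

```puppet
# Illustrative sketch: branch on the built-in os.family fact
# to pick the platform-appropriate package name.
if $facts['os']['family'] == 'Debian' {
  package { 'apache2':
    ensure => installed,
  }
} else {
  package { 'httpd':
    ensure => installed,
  }
}
```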

Deploying Puppet in Azure


Puppet Enterprise lets you automate the entire lifecycle of your Azure infrastructure simply, scalably, and
securely, from initial provisioning through application deployment.
Puppet Enterprise is available to install directly into Azure using the Azure Marketplace. The Puppet Enterprise image allows you to manage up to 10 Azure VMs for free, and is available to use immediately.

https://azure.microsoft.com/en-us/marketplace/

After you select it, you need to fill in the VM's parameter values. A preconfigured system will then run
and test Puppet, and will preset many of the settings. However, these can be changed as needed. The VM
will then be created, and Puppet will run the install scripts.
Another option for creating a Puppet master in Azure is to install a Linux VM in Azure and deploy the
Puppet Enterprise package manually.

Manifest files
Puppet uses a declarative file syntax to define state. It defines what the infrastructure state should be, but not how it should be achieved. You tell it that you want to install a package, but not how to install the package.
Configuration, or state, is defined in manifest files known as Puppet program files. These files are responsible for determining the state of the application, and have the file extension .pp.
Puppet program files have the following elements:
●● Class. A bucket that you put resources into. For example, you might have an apache class with everything required to run Apache (such as the package, config file, running server, and any users that need to be created). That class then becomes an entity that you can use to compose other workflows.
●● Resource. A single element of your configuration that you can specify parameters for.
●● Module. The collection of all the classes, resources, and other elements of the Puppet program file in a single entity.

Sample manifest (.pp) file


In the following sample .pp file, notice where classes are being defined, and within them, resources and package details are defined.
Note: The -> notation is an “ordering arrow”: it tells Puppet that it must apply the “left” resource before invoking the “right” resource. This allows us to specify order when necessary:

class mrpapp {
  class { 'configuremongodb': }
  class { 'configurejava': }
}

class configuremongodb {
  include wget
  class { 'mongodb': }->

  wget::fetch { 'mongorecords':
    source      => 'https://raw.githubusercontent.com/Microsoft/PartsUnlimitedMRP/master/deploy/MongoRecords.js',
    destination => '/tmp/MongoRecords.js',
    timeout     => 0,
  }->
  exec { 'insertrecords':
    command => 'mongo ordering /tmp/MongoRecords.js',
    path    => '/usr/bin:/usr/sbin',
    unless  => 'test -f /tmp/initcomplete'
  }->
  file { '/tmp/initcomplete':
    ensure => 'present',
  }
}

class configurejava {
  include apt
  $packages = ['openjdk-8-jdk', 'openjdk-8-jre']
  apt::ppa { 'ppa:openjdk-r/ppa': }->
  package { $packages:
    ensure => 'installed',
  }
}

Note: You can download custom Puppet modules that have been created by Puppet and the Puppet community from Puppet Forge.
Puppet Forge is a community repository that contains thousands of modules for download and use, or modification as you need. This saves you the time necessary to recreate modules from scratch.

https://forge.puppet.com/
Lesson Overview
This lesson includes the following topics:
●● What is Ansible
●● Ansible components
●● Installing Ansible
●● Ansible on Azure
●● Playbook structure
●● Run Ansible in Azure Cloud Shell
●● Run Ansible in Visual Studio Code

What is Ansible
Ansible is an open-source platform that automates cloud provisioning, configuration management, and
application deployments. Using Ansible, you can provision VMs, containers, and complete cloud infrastructure. In addition to provisioning and configuring applications and their environments, Ansible allows you to automate deployment and configuration of resources in your environment such as virtual networks, storage, subnets, and resource groups.
Ansible is designed for multi-tier deployments. Unlike Puppet or Chef, Ansible is agentless, so you do not
have to install software on the managed machines.
Ansible also models your IT infrastructure by describing how all of your systems interrelate, rather than
managing just one system at a time.

Ansible Components
The following workflow and component diagram outlines how playbooks can be run in different circum-
stances, one after another. In the workflow, Ansible playbooks:
1. Provision resources. Playbooks can provision resources. In the following diagram, playbooks create
load-balancer virtual networks, network security groups, and VM scale sets on Azure.
2. Configure the application. Playbooks can deploy applications to run particular services, such as
installing Tomcat on a Linux machine to allow you to run a web application.
3. Manage future configurations to scale. Playbooks can alter configurations by applying playbooks to
existing resources and applications, in this instance to scale the virtual machines.
In all cases Ansible makes use of core components such as Roles, modules, APIs, plugins, inventory and
other components.
Note: By default, Ansible manages machines using the SSH protocol.


Note: You do not need to maintain and run commands from any particular central server. Instead, there is a control machine that has Ansible installed on it, and from which playbooks are run.

Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just
managing one system at a time. The core components of Ansible are as follows:
●● Control machine. This is the machine from which the configurations are run. It can be any machine with Ansible installed on it. However, it also requires Python 2 or Python 3 to be installed on the control machine. You can have multiple control nodes; laptops, shared desktops, and servers can all run Ansible.
●● Managed nodes. These are the devices, machines, and environments that are being managed. Managed nodes can also be referred to as hosts. Ansible is not installed on nodes.
●● Playbooks. Playbooks are ordered lists of tasks that have been saved so you can run them in the same order repeatedly. Playbooks are Ansible's language for configuration, deployment, and orchestration. They can describe a policy you want your remote systems to enforce, or they can dictate a set of steps in a general IT process.
●● When you create a playbook, you do so using YAML, which defines a model of a configuration or process, and uses a declarative model. Elements such as tasks, modules, and roles reside within playbooks.
●● Modules. Ansible works by connecting to your nodes and then pushing out to the nodes small programs (or units of code), called modules. Modules are the units of code that define the configuration. They are modular, and can be reused across playbooks. They represent the desired state of the system (declarative), are executed over SSH by default, and are removed when finished.
●● A playbook is typically made up of many modules. For example, you could have one playbook
containing three modules: a module for creating an Azure Resource group, a module for creating a
virtual network, and a module for adding a subnet.
●● Your library of modules can reside on any machine, and does not require any servers, daemons, or databases. Typically, you'll work with your favorite terminal program, a text editor, and most likely a version control system to keep track of changes to your content. A complete list of available modules is available on Ansible's All modules page.
●● You can find details of the Ansible modules that are available for Azure on GitHub, and you can also preview Ansible Azure modules on the Ansible Galaxy page.
●● Inventory. Inventory is a list of managed nodes. Ansible represents the machines it manages using an .INI file that puts all of your managed machines in groups of your own choosing. When adding new machines, you do not need to use additional SSL-signing servers, thus avoiding Network Time Protocol (NTP) and Domain Name System (DNS) issues. You can create the inventory manually, or, for Azure, Ansible supports dynamic inventories, which means that the host inventory is dynamically generated at runtime. Ansible supports host inventories for other managed hosts as well.
●● Roles. Roles are predefined file structures that allow automatic loading of certain variables, files, tasks, and handlers, based on the file structure. This allows for easier sharing of roles. You might, for example, create roles for a web server deployment.


●● Facts. Facts are data points about the remote system that Ansible is managing. When a playbook is run against a machine, Ansible gathers facts about the state of the environment to determine the state before executing the playbook.
●● Plug-ins. Plug-ins are code that supplements Ansible's core functionality.
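To make the inventory component described above concrete, a minimal static .INI inventory might look like the following sketch. The group names, host names, and user are hypothetical examples, not from the course:

```ini
# Managed nodes grouped by role; names and user are illustrative
[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com ansible_user=azureuser
```

A playbook's hosts entry can then target a group name such as webservers rather than individual machines.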
Installing Ansible
To enable a machine to act as the control machine from which to run playbooks, you need to install both
Python and Ansible.
When you install Python, you must install either Python 2 (version 2.7) or Python 3 (version 3.5 or higher). You can use pip, the Python package manager, to install Ansible and its dependencies, or you can use other installation methods.
An Ansible installation has the following characteristics:


●● You only need to install Ansible on one machine, which could be a workstation or a laptop. You can manage an entire fleet of remote machines from that central point.
●● No database is installed as part of the Ansible setup.
●● No daemons are required to start or keep running.


You can install Ansible in many different distributions of Linux, including, but not limited to, those in the
following list:
●● Red Hat Enterprise Linux


https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html
https://github.com/ansible/ansible/tree/devel/lib/ansible/modules/cloud/azure
https://galaxy.ansible.com/Azure/azure_preview_modules
●● CentOS
●● Debian
●● Ubuntu
●● Fedora
Note: Fedora is not supported as an endorsed Linux distribution on Azure. However, it can be run on Azure by uploading your own image. All the other Linux distributions listed are endorsed on Azure.
You can use the appropriate package manager software to install Ansible and Python, such as yum, apt, or pip. For example, to install Ansible on Ubuntu, run the following commands:
## Install pre-requisite packages
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip

## Install Ansible and Azure SDKs via pip
sudo pip install ansible[azure]

You can also install Ansible and Python on macOS, and use that environment as the control machine.

You cannot install Ansible on the Windows operating system. However, you can run playbooks from a Windows machine by utilizing other products and services. You can install Ansible and Python on operating systems such as:
●● Windows Subsystem for Linux. Windows Subsystem for Linux is an Ubuntu Linux environment availa-
ble as part of Windows.
●● Azure Cloud Shell. Use Azure Cloud Shell via a web browser on a Windows machine.
●● Microsoft Visual Studio Code. Using Visual Studio Code, choose one of the following options:

●● Run Ansible playbook in Docker.


●● Run Ansible playbook on local Ansible.
●● Run Ansible playbook in Azure Cloud Shell.
●● Run Ansible playbook remotely via SSH.

When Ansible manages remote machines, it does not leave software installed or running on them.
Therefore, there’s no real question about how to upgrade Ansible when moving to a new version.
When managing nodes:
●● On the managed nodes or environments, you need a way to communicate, which is normally SSH by default. This uses the SSH file transfer protocol. If that's not available, you can switch to the secure copy protocol (scp), which you can do in ansible.cfg. For Windows machines, use PowerShell.
Note: You can find out more about installing Ansible on the Install Ansible on Azure virtual machines documentation page.

Ansible on Azure
There are a number of ways you can use Ansible in Azure.
You can use one of the following images available as part of the Azure Marketplace:
●● Red Hat Ansible on Azure is available as an image on Azure Marketplace, and it provides a fully
configured version. This enables easier adoption for those looking to use Ansible as their provisioning
and configuration management tool. This solution template will install Ansible on a Linux VM along
with tools configured to work with Azure. This includes:


●● Ansible (the latest version by default. You can also specify a version number.)
●● Azure CLI 2.0
●● MSI VM extension
●● apt-transport-https
●● Ansible Tower (by Red Hat). Ansible Tower by Red Hat helps organizations scale IT automation and
manage complex deployments across physical, virtual, and cloud infrastructures. Built on the proven
open-source Ansible automation engine, Ansible Tower includes capabilities that provide additional
levels of visibility, control, security, and efficiency necessary for today's enterprises. With Ansible
Tower you can:
●● Provision Azure environments with ease using pre-built Ansible playbooks.
●● Use role-based access control (RBAC) for secure, efficient management.


●● Maintain centralized logging for complete auditability and compliance.
●● Utilize the large community of content available on Ansible Galaxy.
This offering requires the use of an available Ansible Tower subscription eligible for use in Azure. If you
don't currently have a subscription, you can obtain one directly from Red Hat.
Another option for running Ansible on Azure is to deploy a Linux VM on Azure virtual machines, which is
infrastructure as a service (IaaS). You can then install Ansible and the relevant components, and use that
as the control machine.

https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ansible-install-configure?toc=%2Fen-us%2Fazure%2Fansible%2Ftoc.json&bc=%2Fen-us%2Fazure%2Fbread%2Ftoc.json
Note: The Windows operating system is not supported as a control machine. However, you can run Ansible from a Windows machine by utilizing other services and products such as Windows Subsystem for Linux, Azure Cloud Shell, and Visual Studio Code.
For more details about running Ansible in Azure, visit:
●● The Ansible on Azure documentation website.
●● The Ansible Azure Guide page.

Playbook structure
Playbooks are the language of Ansible's configurations, deployments, and orchestrations. You use them
to manage configurations of and deployments to remote machines. Playbooks are structured with YAML
(a data serialization language), and support variables. Playbooks are declarative and include detailed
information regarding the number of machines to configure at a time.

YAML is based around the structure of key-value pairs. In the following example, the key is name, and the value is mynamevalue:
name: mynamevalue

Indentation and new lines are used to separate key-value pairs.
The YAML syntax does not define how many spaces to use for indentation. You can indent as many spaces as you want. However, the indentation must be uniform throughout the file, and items at the same level must use the same number of indentation spaces.
When a value is indented in YAML, that indented value belongs to the parent key. If the parent key already has a value, then you cannot indent under it.
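A small sketch of these indentation rules follows; the keys shown are illustrative, not from the course:

```yaml
# Valid: location and size are uniformly indented, so both belong to the parent key vm
vm:
  location: westus
  size: Standard_DS1_v2

# Invalid: vm already has the scalar value myVM, so nothing may be indented under it
# vm: myVM
#   location: westus
```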

The following list describes some of the playbook components:
●● name. The name of the playbook; it can be any name you wish.
●● hosts. Lists where the configuration is applied, or the machines being targeted. Hosts can be a list of one or more groups or host patterns, separated by colons. It can also contain groups such as web servers or databases, provided you have defined these groups in your inventory.
●● connection. Specifies the connection type.
●● remote_user. Specifies the user to use to connect to complete the tasks.
●● vars. Allows for the definition of variables that can be used throughout your playbook.
●● gather_facts. Determines whether to gather node data or not. Can be yes or no.
●● tasks. Indicates the start of the modules where the actual configuration is defined.
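Putting those components together, a minimal playbook skeleton might look like the following sketch; the host group, variable, and task are illustrative examples, not from the course:

```yaml
- name: Example playbook          # name: any label you wish
  hosts: webservers               # hosts: group or host pattern from your inventory
  connection: ssh                 # connection: the connection type
  remote_user: azureuser          # remote_user: user used to connect
  gather_facts: yes               # gather_facts: whether to gather node data
  vars:
    greeting: hello               # vars: variables usable throughout the playbook
  tasks:                          # tasks: where the actual configuration starts
    - name: Show a variable
      debug:
        msg: "{{ greeting }}"
```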

https://docs.microsoft.com/en-us/azure/ansible/?ocid=AID754288&wt.mc_id=CFID0352
https://docs.ansible.com/ansible/latest/scenario_guides/guide_azure.html

You run a playbook using the following command:


ansible-playbook < playbook name >

You can also check the syntax of a playbook using the following command:
ansible-playbook <playbook name> --syntax-check

This runs the playbook file through the parser to ensure that its included items, such as files and roles, are valid and that it has no syntax problems. You can also use the --verbose command.
●● To see what hosts would be affected by running a playbook you can run the command:
ansible-playbook playbook.yml --list-hosts
The following code is a sample playbook that will create a Linux virtual machine in Azure:
- name: Create Azure VM
  hosts: localhost
  connection: local
  vars:
    resource_group: ansible_rg5
    location: westus
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"
    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: myResourceGroup
        name: myVnet
        address_prefixes: "10.0.0.0/16"
    - name: Add subnet
      azure_rm_subnet:
        resource_group: myResourceGroup
        name: mySubnet
        address_prefix: "10.0.1.0/24"
        virtual_network: myVnet
    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: myResourceGroup
        allocation_method: Static
        name: myPublicIP
      register: output_ip_address
    - name: Dump public IP for VM which will be created
      debug:
        msg: "The public IP is {{ output_ip_address.state.ip_address }}."
    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: myResourceGroup
        name: myNetworkSecurityGroup
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound
    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: myResourceGroup
        name: myNIC
        virtual_network: myVnet
        subnet: mySubnet
        public_ip_name: myPublicIP
        security_group: myNetworkSecurityGroup
    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: myVM
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/azureuser/.ssh/authorized_keys
            key_data: <your-key-data>
        network_interfaces: myNIC
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.5'
          version: latest

Note: Ansible playbook samples for Azure are available on GitHub, on the Azure-Samples ansible-playbooks page.

Walkthrough-Run Ansible in Azure Cloud Shell


You can run Ansible playbooks on a Windows machine by using Azure Cloud Shell with Bash. This is the quickest and easiest way to begin using playbooks for provisioning and managing resources in Azure.

Azure Cloud Shell has Ansible preinstalled. After you are signed into Azure Cloud Shell, specify the Bash console. (You do not have to install or configure anything to be able to run Ansible.)

https://github.com/Azure-Samples/ansible-playbooks
You can also use the editor included with Azure Cloud Shell to view, open, and edit your playbook .yml
files. You can open the editor by clicking on the curly brackets icon in the taskbar at the top of Azure
Cloud Shell.
The following steps outline how to create a resource group in Azure using Ansible in Azure Cloud Shell
with bash:
1. Go to the Azure Cloud Shell at https://shell.azure.com, or launch Azure Cloud Shell from within the Azure portal by clicking on the Cloud Shell icon in the taskbar's top, left corner.
2. Authenticate to Azure by entering your credentials if prompted.
3. Ensure Bash is selected as the shell, in the taskbar's top, left corner.
4. Create a new file using the following command:
vi rg.yml

5. Enter insert mode by selecting the i key.


6. Copy and paste the following code into the file, and remove the comment character (#); it is included here to allow the code to display properly in the learning platform. The code should look like, and be aligned as, it is in the earlier screenshot.
#---
- hosts: localhost
  connection: local
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: ansible-rg
        location: eastus

7. Exit insert mode by selecting the Esc key.


8. Save the file and exit the editor by entering the following command:
:wq

9. Run the playbook by running the following command:


ansible-playbook rg.yml

10. Verify that you receive output similar to the following code:
PLAY [localhost] *********************************************************************************

TASK [Gathering Facts] ***************************************************************************
ok: [localhost]

TASK [Create resource group] *********************************************************************
changed: [localhost]

https://shell.azure.com
TASK [debug] *************************************************************************************
ok: [localhost] => {
    "rg": {
        "changed": true,
        "contains_resources": false,
        "failed": false,
        "state": {
            "id": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/ansible-rg",
            "location": "eastus",
            "name": "ansible-rg",
            "provisioning_state": "Succeeded",
            "tags": null
        }
    }
}

PLAY RECAP ***************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0

11. Open the Azure portal and verify that the resource group now displays in the portal.

Walkthrough-Run Ansible in Visual Studio Code


You can also run playbooks on a Windows machine by using Visual Studio Code, which leverages other services that integrate with Visual Studio Code.

Complete the following steps to create network resources in Azure using Visual Studio Code:
1. If not already installed, install Visual Studio Code by downloading it from the Visual Studio Code download page. You can install it on the Windows, Linux, or macOS operating systems.
2. Go to File > Preferences > Extensions.
3. Search for and install the Azure Account extension.
https://code.visualstudio.com/
4.
5. Search for and install the Ansible extension.
6.
7. You can also view details of this extension on the Visual Studio Marketplace page.
8. In Visual Studio Code, go to View > Command Palette. Alternatively, you can select the Manage (cog) icon in the bottom, left corner of the window, and then select Command Palette.

9.
10. In the Command Palette, type Azure, and then select Azure: Sign In.

https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible&ocid=AID754288&wt.mc_id=CFID0352
11.
12. When a browser launches and prompts you to sign in, select your Azure account. Verify that a message displays stating that you are now signed in and can close the page.

13.
14. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
15. Create a new file and paste in the following playbook text:
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: myResourceGroup
        location: eastus
    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: myResourceGroup
        name: myVnet
        address_prefixes: "10.0.0.0/16"
    - name: Add subnet
      azure_rm_subnet:
        resource_group: myResourceGroup
        name: mySubnet
        address_prefix: "10.0.1.0/24"
        virtual_network: myVnet
    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: myResourceGroup
        allocation_method: Static
        name: myPublicIP
      register: output_ip_address
    - name: Dump public IP for VM which will be created
      debug:
        msg: "The public IP is {{ output_ip_address.state.ip_address }}."
    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: myResourceGroup
        name: myNetworkSecurityGroup
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound
    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: myResourceGroup
        name: myNIC
        virtual_network: myVnet
        subnet: mySubnet
        public_ip_name: myPublicIP
        security_group: myNetworkSecurityGroup
    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: myVM
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: true
        admin_password: Password0134
        network_interfaces: myNIC
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.5'
          version: latest
1. Save the file locally with a .yml file name.


2. Right-click on the file name in the tab at the top of Visual Studio Code, and review the options available to run the Ansible playbook:


●● Run Ansible Playbook in Docker
●● Run Ansible Playbook in Local Ansible
●● Run Ansible Playbook Cloud Shell


●● Run Ansible Playbook Remotely via ssh
3. Select the third option, Run Ansible Playbook Cloud Shell.


4.
5. A notice might appear in the bottom, left side, informing you that the action could incur a small charge, as it will use some storage when the playbook is uploaded to Cloud Shell. Select the Confirm option.

6.
7. Verify that the Azure Cloud Shell pane now displays in the bottom of Visual Studio Code and is
running the playbook.
8.
9. When the playbook finishes running, open the Azure portal and verify that the resource group, resources, and VM have all been created. If you have time, sign in to the VM with the user name and password specified in the playbook to verify it as well.

Note: If you want to use a public/private key pair to connect to the Linux VM, instead of a user name and password, you could use the following code in the previous Create VM module steps:
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
  - path: /home/adminUser/.ssh/authorized_keys
    key_data: < insert your ssh public key here... >
Cloud-init with Azure

Lesson Overview
This lesson includes the following topics:
●● What is cloud-init
●● Cloud-init components
●● Cloud-init on Azure
●● Configure a Linux VM using cloud-init and Azure Cloud Shell

What is Cloud-init
Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use
cloud-init to install packages, write files, and configure users and security.
Because cloud-init is called during the initial boot process, there are no additional steps or required
agents to apply your configuration. In addition, as the configuration is performed on initial boot, it
configures the VMs quickly and early.
Cloud-init also works across Linux distributions. For example, you don't need to use apt-get install
or yum install to install a package. Instead, you define a list of packages to install, and cloud-init
automatically uses the native package management tool for the distribution you select.
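As an illustrative sketch of those capabilities, a cloud-config file can combine package installation, file writing, and user configuration in one declarative document. The package name, file path, and user shown here are hypothetical examples, not from the course:

```yaml
#cloud-config
package_upgrade: true
packages:
  - nginx                       # cloud-init picks the native package manager
write_files:
  - path: /etc/motd
    content: |
      Provisioned by cloud-init
users:
  - name: demouser              # hypothetical user for illustration
    groups: sudo
    shell: /bin/bash
```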

Cloud-init components
Cloud-init is run on Azure by using a configuration definition file, known as cloud-config. This file is in the form of a .txt file and uses the .yml file structure.
The .txt cloud-config file is applied by using the Azure Command-Line Interface (Azure CLI) az vm create command with the --custom-data parameter, specifying the .txt cloud-config file.
For example, you would create a file named cloud-init.txt and place the following configuration details into it:
#cloud-config
package_upgrade: true
packages:
  - httpd

We can then run this configuration file by running the Azure CLI command as follows, specifying the
--custom-data switch and the .txt file name:
az vm create \
  --resource-group myResourceGroup \
  --name centos74 \
  --image OpenLogic:CentOS:7-CI:latest \
  --custom-data cloud-init.txt \
  --generate-ssh-keys
For more information on how to format cloud-config files, see the Cloud config data page, which is part of the cloud-init documentation site.

Cloud-init on Azure
If cloud-init is already installed in the Linux image, you need not do anything else to use cloud-init
because it works as soon as it is installed.
Microsoft is actively working with endorsed Linux distribution partners to have cloud-init enabled images
available in the Azure marketplace. These images will make cloud-init deployments and configurations
work seamlessly with VMs and VM Scale Sets.
The following table outlines the current cloud-init enabled images available on the Azure platform.

Publisher    Offer           SKU           Version    cloud-init ready
Canonical    Ubuntu Server   18.04-LTS     latest     yes
Canonical    Ubuntu Server   17.10         latest     yes
Canonical    Ubuntu Server   16.04-LTS     latest     yes
Canonical    Ubuntu Server   14.04.5-LTS   latest     yes
CoreOS       CoreOS          Stable        latest     yes
OpenLogic    CentOS          7-CI          latest     yes
RedHat       RHEL            7-RAW-CI      latest     yes

Note: Currently, Azure Stack does not support provisioning of Red Hat Enterprise Linux 7.4 and CentOS 7.4 using cloud-init.
Deploying a cloud-init enabled VM is as simple as referencing a cloud-init enabled distribution during deployment. Linux distribution maintainers have to choose to enable and integrate cloud-init into their base Azure published images. After you've confirmed that the image you want to deploy is cloud-init enabled, you can use the Azure CLI to deploy the image.
With cloud-init, you don't need to convert your existing scripts into a cloud-config file; cloud-init accepts
multiple input types, one of which is a Bash script. If you've been using the Linux Custom Script Azure
Extension to run your scripts, you can migrate them to use cloud-init. However, while Azure extensions
have integrated reporting to alert you if scripts fail, a cloud-init image deployment will not fail if the script
fails.
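For instance, instead of cloud-config YAML, the file passed via --custom-data could contain a plain Bash script; cloud-init detects the shebang line and runs it once on first boot. The script contents below are an illustrative sketch, not from the course:

```shell
#!/bin/bash
# cloud-init recognizes the shebang and executes this as a first-boot script.
# Writing a marker file stands in for real configuration work here.
echo "configured by cloud-init" > /tmp/firstboot-marker
```

You would pass this file to az vm create with --custom-data exactly as with a cloud-config file.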
What is the difference between cloud-init and the Windows Azure Linux Agent?

Windows Azure Linux Agent (WALinuxAgent) is an Azure platform-specific agent that you use to provision and configure VMs, and manage Azure extensions. To allow existing cloud-init customers to use their current cloud-init scripts, Microsoft is enhancing the task of configuring VMs to use cloud-init instead of the Linux Agent. If you have existing investments in cloud-init scripts for configuring Linux systems, there are no additional settings required to enable them.
https://cloudinit.readthedocs.io/en/latest/topics/format.html#cloud-config-data
If you don't include the Azure CLI --custom-data switch at provisioning time, WALinuxAgent takes the minimal VM provisioning parameters required to provision the VM and completes the deployment with default settings. If you do reference cloud-init through the --custom-data switch, whatever is contained in your custom data (individual settings or a full script) overrides the WALinuxAgent defaults.

Walkthrough-Configure a Linux VM using Cloud-init and Azure Cloud Shell

In this walkthrough, we will create a Linux VM using the Azure CLI, and configure it on boot using cloud-init.
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Azure free account webpage.

1. Go to the Azure Cloud Shell at https://shell.azure.com, or launch Azure Cloud Shell from within the Azure portal by selecting the Cloud Shell icon in the top, left corner.
2. Authenticate to Azure, and enter your credentials if prompted.
3. On the left side of the Azure portal taskbar, ensure Bash is selected as the shell.
4. Create a new file using the following command:
vi cloud-init.txt

5. Enter insert mode by selecting the i key.


6. Copy and paste the following code into the file:
#cloud-config
package_upgrade: true
packages:
  - httpd

7. Exit insert mode by selecting the Esc key.


8. Save the file, and exit the editor by entering the following command:
:wq

9. Before deploying a VM and using cloud-init to configure it, you first need to create a resource group
in Azure in which to deploy the VM by running the following command:
az group create --name cloud-init-rg1 --location <your nearest datacenter>
10. After the command completes, verify that the resource group has been created successfully.

https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
11. Next, run the cloud-init configuration file by running the Azure CLI command as in the following code, specifying the --custom-data switch and the file name:
az vm create \
  --resource-group cloud-init-rg1 \
  --name centos74 \
  --image OpenLogic:CentOS:7-CI:latest \
  --custom-data cloud-init.txt \
  --generate-ssh-keys

12. After the configuration file finishes, open the Azure portal and verify the VM has been created.
13. Open the deployed VM, and then select Connect.
14. From the connection pane, copy the SSH connection details.

15.
16. Return to the Azure Cloud Shell and sign in to the VM with the credentials you obtained. Verify that you can connect to the VM, and then run the following command to verify the machine status:
cloud-init status

17. Verify the VM status returns as status: running.


18.
19. Now run the following command to view the package installation history:
sudo yum history

20. Verify that the package history displays httpd, as was specified in the cloud-init .txt configuration file.
Lesson Overview
This lesson includes the following topics:
●● What is Terraform
●● Terraform components
●● Terraform on Azure
●● Installing Terraform
●● Terraform config file structure
●● Run Terraform in Azure Cloud Shell
●● Run Terraform in Visual Studio Code

What is Terraform
HashiCorp Terraform is an open-source tool that allows you to provision, manage, and version cloud infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources such as VMs, storage accounts, and networking interfaces.



Terraform's command-line interface (CLI) provides a simple mechanism to deploy and version the
configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and
preview infrastructure changes before you deploy them.
Terraform also supports multi-cloud scenarios. This means it enables developers to use the same tools
and configuration files to manage infrastructure on multiple cloud providers.
You can run Terraform interactively from the CLI with individual commands, or non-interactively as part of a continuous integration pipeline.


There is also an enterprise version of Terraform available, Terraform Enterprise.
Note: You can view more details about Terraform on the Terraform website.

Terraform components

Some of Terraform's core components include:



●● Configuration files. Text-based configuration files allow you to define infrastructure and application configuration, and end in the .tf or .tf.json extension. The files can be in either of the following two formats:

●● Terraform. The Terraform format is more human-readable, supports comments, and is the generally recommended format for most Terraform files. Terraform files end in the .tf extension.

https://www.terraform.io/

●● JSON. The JSON format is meant more for machines to create, modify, and update. However, it can also be used by Terraform operators if you prefer. JSON files end in the .tf.json extension.
The order of items (such as variables and resources) within the configuration doesn't matter; Terraform
configurations are declarative, so references to other resources and variables don't depend on the
order in which they're defined.
●● Terraform CLI. The Terraform CLI is a command-line interface from which you run configurations. You can run commands such as terraform plan and terraform apply, along with many others. A CLI configuration file that configures per-user settings for the CLI is also available. However, this is separate from the CLI infrastructure configuration. In Windows operating system environments, the configuration file is named terraform.rc, and is stored in the relevant user's %APPDATA% directory. On Linux systems, the file is named .terraformrc (note the leading period), and is stored in the home directory of the relevant user.
●● Modules. Modules in Terraform are self-contained packages of Terraform configurations that are managed as a group. You use modules to create reusable components in Terraform and for basic code organization. A list of available modules for Azure is available on the Terraform Registry page.
●● Provider. The provider is responsible for understanding API interactions and exposing resources.
●● Overrides. Overrides are a way to create configuration files that are loaded last and merged into (rather than appended to) your configuration. You can create overrides to modify Terraform behavior without having to edit the Terraform configuration. They can also be used as temporary modifications that you can make to a configuration without having to modify the configuration file itself.
●● Resources. Resources are sections of a configuration file that define components of your infrastructure,
such as VMs, network resources, containers, dependencies, or DNS records. The resource block
creates a resource of the given TYPE (first parameter) and NAME (second parameter). The combination of type and name must be unique. Within the braces is the resource's configuration.
●● Execution plan. You can issue a command in the Terraform CLI to generate an execution plan. The execution plan shows what Terraform will do when a configuration is applied. This enables you to verify changes and flag potential issues. The command for generating an execution plan is terraform plan.
●● Resource graph. Using a resource graph, you can build a dependency graph of all resources, and can
then create and modify resources in parallel. This helps you increase efficiency when provisioning and
configuring resources.
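The resource and override components described above can be sketched with a pair of hypothetical files (all names are invented for the example). Terraform loads the override file last and merges it into, rather than appends it to, the configuration:

```hcl
# main.tf -- a resource block of TYPE "azurerm_resource_group" and NAME "example"
resource "azurerm_resource_group" "example" {
  name     = "exampleResourceGroup"
  location = "eastus"
}
```

```hcl
# override.tf -- loaded last and merged into the configuration;
# only the attributes listed here are replaced
resource "azurerm_resource_group" "example" {
  location = "westus"
}
```

Running terraform plan against a directory containing both files would show a single resource group with location "westus", because the override wins for that one attribute while the rest of the original block is untouched.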

Terraform on Azure
You can download Terraform for use in Azure via the Azure Marketplace, the Terraform Marketplace, or you can install it manually on Azure VMs.

Azure Marketplace offers a fully-configured Linux image containing Terraform with the following charac-
teristics:
●● The deployment template will install Terraform on a Linux (Ubuntu 16.04 LTS) VM along with tools
configured to work with Azure. Items downloaded include:
●● Terraform (latest)
●● Azure CLI 2.0

https://registry.terraform.io/browse?provider=azurerm

●● Managed Service Identity (MSI) VM extension


●● Unzip
●● Jq
●● apt-transport-https
●● This image also configures a remote back-end to enable remote state management using Terraform.

The Terraform Marketplace image makes it easy to get started using Terraform on Azure, without having
to install and configure Terraform manually. There are no software charges for this Terraform VM image.
You pay only the Azure hardware usage fees that are assessed based on the size of the VM that's provi-
sioned.

You can also deploy a Linux or Windows VM in Azure VM's IaaS service, install Terraform and the relevant
components, and then use that image.

Installing Terraform
To get started, you must install Terraform on the machine from which you are running the Terraform
commands.
Terraform can be installed on Windows, Linux, or macOS environments. Go to the Terraform downloads page, and choose the appropriate download package for your environment.

If you download Terraform for the Windows operating system:


1. Find the install package, which is bundled as a .zip file.
2. Copy the files from the .zip file to a local directory such as C:\terraform. That directory will hold the Terraform binary, so make sure it is available on the PATH.
3. To set the PATH environment variable, run the command set PATH=%PATH%;C:\terraform, or point to wherever you have placed the Terraform executable.
4. Open an administrator command window at C:\terraform and run the command terraform to verify the installation. You should be able to view the Terraform help output.

https://www.terraform.io/downloads.html
If you download Terraform for the Linux operating system:

1. Download Terraform using the following command:


wget https://releases.hashicorp.com/terraform/0.xx.x/terraform_0.xx.x_linux_amd64.zip

2. Install Unzip using the command:


sudo apt-get install unzip

3. Unzip and set the path using the following commands:


unzip terraform_0.11.1_linux_amd64.zip
sudo mv terraform /usr/local/bin/

4. Verify the installation by running the command terraform. Verify that the Terraform help output displays.
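The effect of moving the binary into /usr/local/bin in step 3 is simply to place it on the PATH. That mechanism can be illustrated with a stand-in executable (a fake terraform script in a scratch directory, not the real binary):

```shell
# Create a scratch directory holding a stand-in 'terraform' executable
mkdir -p /tmp/tf-demo/bin
printf '#!/bin/sh\necho "Terraform v0.11.1"\n' > /tmp/tf-demo/bin/terraform
chmod +x /tmp/tf-demo/bin/terraform

# Once the directory is on PATH, the shell resolves the command from anywhere
export PATH="/tmp/tf-demo/bin:$PATH"
terraform   # prints: Terraform v0.11.1
```

Placing the real binary in /usr/local/bin achieves the same result without editing PATH, because that directory is on the default PATH of most Linux distributions.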

Terraform supports a number of different methods for authenticating to Azure. You can use:
●● The Azure CLI
●● A Managed Service Identity (MSI)
●● A service principal and a client certificate
●● A service principal and a client secret

When running Terraform as part of a continuous integration pipeline, you can use either an Azure service
principal or MSI to authenticate.
To configure Terraform to use your Azure Active Directory (Azure AD) service principal, set the following
environment variables:

●● ARM_SUBSCRIPTION_ID
●● ARM_CLIENT_ID

●● ARM_CLIENT_SECRET
●● ARM_TENANT_ID
●● ARM_ENVIRONMENT

These variables are then used by the Azure Terraform modules. You can also set the environment if you
are working with an Azure cloud other than an Azure public cloud.

Use the following sample shell script to set these variables:



#!/bin/sh
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=your_subscription_id
export ARM_CLIENT_ID=your_appId
export ARM_CLIENT_SECRET=your_password
export ARM_TENANT_ID=your_tenant_id

# Not needed for public, required for usgovernment, german, china


export ARM_ENVIRONMENT=public

Note: After you install Terraform, before you can apply .tf config files you must run the following command to initialize Terraform for the installed instance:
terraform init

Terraform config file structure


Take a moment to skim through the following sample Terraform file and try to identify the different elements within it. The file performs the following actions on Azure:
●● Authenticates
●● Creates a resource group
●● Creates a virtual network
●● Creates a subnet
●● Creates a public IP address
●● Creates a network security group and rule
●● Creates a virtual network interface card
●● Creates a random text for a unique storage account name
●● Creates a storage account for diagnostics
●● Creates a virtual machine

Sample Terraform .tf file


# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  name     = "myResourceGroup"
  location = "eastus"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
  name                = "myVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  tags {
    environment = "Terraform Demo"
  }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
  virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
  address_prefix       = "10.0.1.0/24"
}

# Create public IPs
resource "azurerm_public_ip" "myterraformpublicip" {
  name                         = "myPublicIP"
  location                     = "eastus"
  resource_group_name          = "${azurerm_resource_group.myterraformgroup.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "Terraform Demo"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
  name                      = "myNIC"
  location                  = "eastus"
  resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
  network_security_group_id = "${azurerm_network_security_group.myterraformnsg.id}"

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = "${azurerm_resource_group.myterraformgroup.name}"
  }

  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
  name                     = "diag${random_id.randomId.hex}"
  resource_group_name      = "${azurerm_resource_group.myterraformgroup.name}"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  name                  = "myVM"
  location              = "eastus"
  resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
  network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "myOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "myvm"
    admin_username = "azureuser"
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/azureuser/.ssh/authorized_keys"
      key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
    }
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

Walkthrough-Run Terraform in Azure Cloud Shell
Terraform is pre-installed in Azure Cloud Shell, so you can use it immediately with no additional configuration required. Because Terraform can be installed on both the Windows and Linux operating systems, you can use either the PowerShell or Bash shell to run it. In this walkthrough, you will create a resource group in Azure using Terraform in Azure Cloud Shell with Bash.
Here's an example of Azure Cloud Shell with PowerShell, running Terraform.

Here's an example of Azure Cloud Shell with Bash shell, running Terraform.

You can also use the Azure Cloud Shell editor to view, open, and edit your .tf files. To open the editor, select the braces icon in the taskbar at the top of the Azure Cloud Shell window.

●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Azure free account webpage.

The following steps outline how to create a resource group in Azure using Terraform in Azure Cloud Shell
with bash:
1. Open the Azure Cloud Shell at https://shell.azure.com, or launch Azure Cloud Shell from within the
Azure portal by selecting the Azure PowerShell icon.
2. Authenticate to Azure by entering your credentials, if prompted.
3. In the taskbar, ensure Bash is selected.
4. Create a new file using the following command:
vi terraform-createrg.tf

5. Enter insert mode by selecting the i key.


6. Copy and paste the following code into the file:

https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

provider "azurerm" {
}

resource "azurerm_resource_group" "rg" {
  name     = "testResourceGroup"
  location = "westus"
}

7. Exit insert mode by selecting the Esc key.


8. Save the file and exit the editor by entering the following command:
:wq

9. Use the following command to initialize Terraform:
terraform init

10. You should receive a message saying Terraform was successfully initiated.


12. Run the configuration .tf file by running the following command:
terraform apply

13. You should receive a prompt saying a plan has been generated. Details of the changes should be
listed, followed by a prompt asking if you wish to apply the changes or cancel these actions.

15. Enter a value of yes and select Enter. The command should run successfully, with output similar to the following screenshot.

17. Open the Azure portal and verify that the resource group is now present in the portal.

Walkthrough-Run Terraform in Visual Studio Code
You can also run Terraform configuration files using Visual Studio Code, leveraging other Terraform services that integrate with Visual Studio Code. Two Visual Studio Code extensions are required: Azure Account and Azure Terraform.
In this walkthrough, you will create a VM using Terraform in Visual Studio Code.

●● You require Visual Studio Code



●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Azure free account webpage.

1. If Visual Studio Code is not already installed, you will need to install it. You can download it from the Visual Studio Code website, and can install it on Windows, Linux, or macOS.
2. In Visual Studio Code, select > > .
3. Search for and install the Azure Account extension.

5. Search for and install the Azure Terraform extension. Ensure that you select the extension authored by Microsoft, as there are a few available by other authors.

https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
https://code.visualstudio.com/

7. You can view more details of this extension at the Visual Studio Marketplace on the Azure Terraform extension page.
8. In Visual Studio Code, open the Command Palette by selecting View > Command Palette. You can also access the Command Palette by selecting the Manage (cog) icon on the bottom, left side of the window, and then selecting Command Palette.

10. In the Command Palette search field, type Azure, and from the results, select Azure: Sign In.

12. When a browser launches and prompts you to sign in to Azure, select your Azure account. The
message You are signed in now and can close this page., should display in the browser.

https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureterraform

14. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
15. Create a new file and paste the following config file contents into it:

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  name     = "terraform-rg2"
  location = "eastus"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
  name                = "myVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  tags {
    environment = "Terraform Demo"
  }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
  virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
  address_prefix       = "10.0.1.0/24"
}

# Create public IPs
resource "azurerm_public_ip" "myterraformpublicip" {
  name                         = "myPublicIP"
  location                     = "eastus"
  resource_group_name          = "${azurerm_resource_group.myterraformgroup.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "Terraform Demo"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
  name                      = "myNIC"
  location                  = "eastus"
  resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
  network_security_group_id = "${azurerm_network_security_group.myterraformnsg.id}"

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = "${azurerm_resource_group.myterraformgroup.name}"
  }

  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
  name                     = "diag${random_id.randomId.hex}"
  resource_group_name      = "${azurerm_resource_group.myterraformgroup.name}"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  name                  = "myVM"
  location              = "eastus"
  resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
  network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "myOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "myvm"
    admin_username = "azureuser"
    admin_password = "Password0134!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

16. Save the file locally, and call it terraform-createvm.tf.


17. Save the file, and then select View > Command Palette. Search for the command terraform, and then select the following command:
Azure Terraform: apply

19. If Azure Cloud Shell is not open in Visual Studio Code, a message might appear in the bottom, left corner asking you if you want to open the cloud shell. Accept, and select OK.
20. Wait for the Azure Cloud Shell pane to appear in the bottom of the Visual Studio Code window and start running the .tf file. When you are prompted to apply the plan or cancel, type yes, and then press Enter.

22. After the command completes successfully, review the list of resources created.

1. Open the Azure portal and verify that the resource group, resources, and VM have been created. If you have time, sign in to the VM with the user name and password specified in the .tf config file to verify.

Note: If you wanted to use a public/private key pair to connect to the Linux VM instead of a user name and password, you could use the os_profile_linux_config block, set the disable_password_authentication key value to true, and include the SSH key details, as in the following code.
os_profile_linux_config {
  disable_password_authentication = true
  ssh_keys {
    path     = "/home/azureuser/.ssh/authorized_keys"
    key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
  }
}

You'd also need to remove the admin_password value in the os_profile block that is present in the example above.
Note: You could also embed the Azure authentication within the script. In that case, you would not need to install the Azure Account extension, as in the following example:
provider "azurerm" {
  subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

Labs Overview

To complete these practical exercises you will require the following:


●● a Microsoft Account
●● an Azure subscription

1. If you already have a Microsoft account that has not already been used to sign up for a free Azure trial subscription, you're ready to get started. If not, don't worry, just create a new Microsoft account.
2. After you've created a Microsoft account, create your free Azure account. You'll need to sign in with your Microsoft account if you're not already signed in. Then you'll need to:
●● Enter your cellphone number and have Microsoft send you a text message to verify your identity.

●● Enter the code you have been sent to verify it.


●● Provide valid payment details. This is required for verification purposes only; your credit card won't be charged for any services you use during the trial period, and the account is automatically deactivated at the end of the trial period unless you explicitly decide to keep it active. For more information, see the FAQ on the Azure sign-up page.

Like many other cloud infrastructure platforms today, Azure is continuously developing updates to its services and components. If you've had your own subscriptions for any length of time, you are already aware that changes to services happen much more rapidly than with more traditional application deployment models.
Every effort will be made to update course content where there are major changes to product functionality. However, there will be occasions where course content does not exactly match the latest version of the product. In most cases, you should still be able to understand the tasks and complete the course. The general guidance from the Azure documentation teams is to check the documentation frequently to see what upcoming notifications have been posted or where documentation has been updated to reflect the latest changes.

We encourage you to consult the Azure updates page as a starting point for the latest information about updates. From there, you can avail yourself of blogs and other resources that are provided to help you stay current in a cloud-enabled world.

https://aka.ms/edx-devops200.4x-msa

https://aka.ms/edx-devops200.4x-az2
https://azure.microsoft.com/en-us/free/free-account-faq/
https://azure.microsoft.com/en-us/updates/

AZ-400T05-M04-Lab Tasks

Module 4 - Third Party and Open Source Tool integration


with Azure
This course includes labs in which you can try out the techniques demonstrated in the course.
Steps for the labs are available at the lab sites linked below. You should click on the links below for the individual lab tasks for this module, and follow the steps outlined there for each lab task:
●● Deploy app with Chef on Azure
●● Deploy app with Puppet on Azure
●● Ansible with Azure

http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithChefonAzure.html
http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithPuppetonAzure.html
http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-AnsiblewithAzure.html

Third Party and Open Source Tool Integration with Azure review questions

Which of the following are main architectural components of Chef?


(choose all that apply)
†† Chef Server
†† Chef Facts

†† Chef Client
†† Chef Workstation

Which of the following are open-source products that are integrated into the Chef Automate image available from Azure Marketplace?
†† Habitat
†† Facts
†† Console Services

†† InSpec

Which of the following are core components of the Puppet automation platform?
(choose all that apply)

†† Master
†† Agent
†† Facts
†† Habitat

Complete the following sentence. The main elements of a Puppet Program (PP) Manifest file are Class, Resource and...?
†† Module

†† Habitat
†† InSpec

†† Cookbooks

Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
†† Puppet
†† Chef
†† Ansible

True or false: The Control Machine in Ansible must have Python installed?
†† True
†† False

Which of the following statements describes a common use for the cloud-init package?
†† cloud-init is used to apply custom configurations to a Linux VM, as it boots for the first time.
†† cloud-init is used to add support for multiple key types and algorithms.
†† cloud-init is used to manage access to Hardware Security Modules (HSM).
†† cloud-init is used to manage keys associated with an Azure Storage account.

Which of the following statements about the cloud-init package are correct?
†† The --custom-data parameter passes the name of the configuration file (.txt).
†† Configuration files (.txt) are encoded in base64.
†† The YML syntax is used within the configuration file (.txt).
†† cloud-init works across Linux distributions.

True or false: Terraform ONLY supports configuration files with the file extension .tf?
†† True
†† False

Which of the following core Terraform components can modify Terraform behavior, without having to edit
the Terraform configuration?
†† Configuration files
†† Overrides

†† Execution plan
†† Resource graph

Which of the following are main architectural components of Chef?


(choose all that apply)
■■ Chef Server
†† Chef Facts
■■ Chef Client
■■ Chef Workstation
Explanation
The correct answers are Chef Server, Chef Client and Chef Workstation.
Chef Facts is an incorrect answer.
Chef Facts is not an architectural component of Chef. Chef Facts misrepresents the term 'Puppet Facts'.
Puppet Facts are metadata used to determine the state of resources managed by the Puppet automation
tool.
Chef has the following main architectural components. 'Chef Server' is the Chef management point. The
two options for the Chef Server are 'hosted' and 'on-premises'. 'Chef Client (node)' is an agent that sits on
the servers you are managing. 'Chef Workstation' is an Administrator workstation where you create Chef
policies and execute management commands. You run the Chef 'knife' command from the Chef Workstation to manage your infrastructure.

Which of the following are open-source products that are integrated into the Chef Automate image available from Azure Marketplace?
■■ Habitat
†† Facts
†† Console Services
■■ InSpec
Explanation
The correct answers are Habitat and InSpec.
Facts and Console Services are incorrect answers.
Facts are metadata used to determine the state of resources managed by the Puppet automation tool.
Console Services is a web-based user interface for managing your system with the Puppet automation tool.
Habitat and InSpec are two open-source products that are integrated into the Chef Automate image
available from Azure Marketplace. Habitat makes the application and its automation the unit of deployment, by allowing you to create platform-independent build artifacts called 'habitats' for your applications.
InSpec allows you to define desired states for your applications and infrastructure. InSpec can conduct audits
to detect violations against your desired state definitions, and generate reports from its audit results.

Which of the following are core components of the Puppet automation platform?

(choose all that apply)


■■ Master
■■ Agent
■■ Facts
†† Habitat
Explanation
The correct answers are Master, Agent and Facts.
Habitat is an incorrect answer.
Habitat is used with Chef for creating platform-independent build artifacts called 'habitats' for your applications.
Master, Agent and Facts are core components of the Puppet automation platform. Another core component
is 'Console Services'. Puppet Master acts as a center for Puppet activities and processes. Puppet Agent runs
on machines managed by Puppet, to facilitate management. Console Services is a toolset for managing and
configuring resources managed by Puppet. Facts are metadata used to determine the state of resources
managed by Puppet.

Complete the following sentence. The main elements of a Puppet Program (PP) Manifest file are Class,
Resource and...?
■■ Module
†† Habitat
†† InSpec
†† Cookbooks
Explanation
Module is the correct answer.
All other answers are incorrect answers.
Habitat, InSpec and Cookbooks are incorrect because they relate to the Chef automation platform.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and Module. Classes define
related resources according to their classification, to be reused when composing other workflows. Resources
are single elements of your configuration which you can specify parameters for. Modules are collections of
all the classes, resources, and other elements in a single entity.
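As an illustration of these elements, a minimal Puppet manifest sketch follows. The file name, class name, and resource values are hypothetical, not taken from the course:

```puppet
# site.pp -- hypothetical example manifest
# A class groups related resources so they can be reused in other workflows.
class webserver {
  # A resource is a single element of configuration with parameters.
  package { 'nginx':
    ensure => installed,
  }
  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }
}

# Apply the class to a node.
node default {
  include webserver
}
```

A collection of such classes and resources, packaged together for distribution, forms a module.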

Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
■■ Puppet
■■ Chef
†† Ansible
Explanation
The correct answers are: Puppet and Chef.
Ansible is an incorrect answer.
Ansible is agentless because you do not need to install an Agent on each of the target machines it manages.
Ansible uses the Secure Shell (SSH) protocol to communicate with target machines. You choose when to
conduct compliance checks and perform corrective actions, instead of using Agents and a Master to perform
these operations automatically.



Puppet and Chef use Agents to communicate with target machines. With Puppet and Chef, you install an
Agent on each target machine managed by the platform. Agents typically run as a background service and
facilitate communication with a Master, which runs on a server. The Master uses information provided by
Agents to conduct compliance checks and perform corrective actions automatically.

True or false: The Control Machine in Ansible must have Python installed?
■■ True
†† False
Explanation
True is the correct answer.
False is an incorrect answer.
A Control Machine in Ansible must have Python installed. The Control Machine is one of the core components
of Ansible; it is the machine from which you run configurations. The other core components of Ansible are Managed
Nodes, Playbooks, Modules, Inventory, Roles, Facts, and Plug-ins. Managed Nodes are resources managed
by Ansible. Playbooks are ordered lists of Ansible tasks. Modules are small blocks of code within a Playbook
that perform specific tasks. Inventory is a list of managed nodes. Roles allow for the automatic and sequenced
loading of variables, files, tasks and handlers. Facts are data points about the remote system which Ansible
is managing. Plug-ins supplement Ansible's core functionality.
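To make these components concrete, here is a minimal sketch of an inventory and a playbook. The host group, file names, and tasks are hypothetical examples, not from the course:

```yaml
# inventory.ini would list the managed nodes, for example:
#   [webservers]
#   web01.example.com
#   web02.example.com

# playbook.yml -- an ordered list of tasks run against managed nodes.
- name: Configure web servers
  hosts: webservers
  tasks:
    - name: Ensure nginx is installed      # a module performing one task
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Report a fact gathered from the remote system
      ansible.builtin.debug:
        msg: "OS family is {{ ansible_facts['os_family'] }}"
```

The Control Machine would run this with `ansible-playbook -i inventory.ini playbook.yml`, connecting to each managed node over SSH.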

Which of the following statements describes a common use for the cloud-init package?
■■ cloud-init is used to apply custom configurations to a Linux VM, as it boots for the first time.
†† cloud-init is used to add support for multiple key types and algorithms.
†† cloud-init is used to manage access to Hardware Security Modules (HSM).
†† cloud-init is used to manage keys associated with an Azure Storage account.
Explanation
The correct answer is: cloud-init is used to apply custom configurations to a Linux VM, as it boots for the
first time.
All other answers are incorrect answers because they describe uses for Azure Key Vault.
Cloud-init is a package that is often used to add custom configurations to a Linux VM, as it boots for the
first time. Cloud-init works across Linux distributions. In Azure, you can add custom configurations to a
Linux VM with cloud-init and a configuration file (.txt). Any provisioning configuration information
contained in the specified configuration file (.txt) is applied to the new VM, when the VM is created.

Which of the following statements about the cloud-init package are correct?
■■ The --custom-data parameter passes the name of the configuration file (.txt).
■■ Configuration files (.txt) are encoded in base64.
■■ The YML syntax is used within the configuration file (.txt).
■■ cloud-init works across Linux distributions.
Explanation
All of the answers are correct answers.
In Azure, you can add custom configurations to a Linux VM with cloud-init by appending the --custom-data
parameter, and passing the name of a configuration file (.txt), to the az vm create command. The
--custom-data parameter passes the name of the configuration file (.txt) as an argument to cloud-init. Then,

cloud-init applies Base64 encoding to the contents of the configuration file (.txt), and sends it along with any
provisioning configuration information that is contained within the configuration file (.txt). Any provisioning
configuration information contained in the specified configuration file (.txt) is applied to the new VM, when
the VM is created. The YML syntax is used within the configuration file (.txt) to define any provisioning
configuration information that needs to be applied to the VM.
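To make the encoding step concrete, the sketch below writes a small cloud-config file and performs the same Base64 encoding locally. The file name and package list are illustrative, and the az command is shown only as a comment because it requires an Azure subscription:

```shell
# 1. Write a cloud-config file using YML syntax (contents are illustrative).
cat > cloud-init.txt <<'EOF'
#cloud-config
package_upgrade: true
packages:
  - nginx
EOF

# 2. The CLI Base64-encodes the file contents before sending them to Azure;
#    the same transformation can be reproduced locally.
base64 < cloud-init.txt > cloud-init.b64

# Decoding the payload recovers the original configuration unchanged.
base64 -d < cloud-init.b64 | head -n 1   # prints "#cloud-config"

# 3. A VM would then be created with (illustration only):
#    az vm create --resource-group myRG --name myVM --image UbuntuLTS \
#        --custom-data cloud-init.txt
```

Because the encoding is lossless, whatever YML you place in the configuration file arrives at the new VM exactly as written.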

True or false: Terraform ONLY supports configuration files with the file extension .tf?
†† True
■■ False
Explanation
False is the correct answer.
True is an incorrect answer because Terraform supports configuration files with the file extensions .tf and
.tf.json.
Terraform configuration files are text based configuration files that allow you to define infrastructure and
application configurations. Terraform uses the file extension .tf for Terraform format configuration files, and
the file extension .tf.json for Terraform JSON format configuration files. Terraform supports configuration
files in either .tf or .tf.json format. The Terraform .tf format is more human-readable, supports comments,
and is the generally recommended format for most Terraform files. The JSON format .tf.json is meant for
use by machines, but you can write your configuration files in JSON format if you prefer.
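As a sketch of the human-readable .tf format, a minimal configuration file might look like the following. The provider, resource name, and values are hypothetical examples, not from the course:

```hcl
# main.tf -- hypothetical configuration in Terraform's .tf format;
# comments like this one are supported, unlike in the .tf.json format.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "rg-demo"       # illustrative values
  location = "westeurope"
}
```

The same content could be expressed in .tf.json, but the .tf form above is the one generally recommended for files maintained by people.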

Which of the following core Terraform components can modify Terraform behavior, without having to
edit the Terraform configuration?
†† Configuration files
■■ Overrides
†† Execution plan
†† Resource graph
Explanation
Overrides is the correct answer.
All other answers are incorrect answers.
Configuration files, in .tf or .tf.json format, allow you to define your infrastructure and application
configurations with Terraform.
Execution plan defines what Terraform will do when a configuration is applied.
Resource graph builds a dependency graph of all Terraform managed resources.
Overrides modify Terraform behavior without having to edit the Terraform configuration. Overrides can also
be used to apply temporary modifications to Terraform configurations without having to modify the
configuration itself.
Module 5 Compliance and Security

Lesson Overview
This lesson includes the following topics:
●● What is rugged DevOps?
●● Rugged DevOps pipeline
●● Software Composition Analysis (SCA)
●● WhiteSource integration with Azure DevOps pipeline
●● Fortify integration with Microsoft Azure DevOps pipeline
●● CheckMarx integration with Azure DevOps
●● Veracode integration with Azure DevOps
●● How to integrate SCA checks into pipelines
●● DevOps and pipeline security
●● Secure DevOps Kit for Azure

What is Rugged DevOps


Cloud app adoption is rising to support business productivity, but a lack of security infrastructure could
be inadvertently compromising data. The Microsoft Security Intelligence Report (a bi-annual publication that
Microsoft creates for customers, partners, and the industry) finds that:
●● 79% of SaaS storage apps
●● 86% of SaaS collaboration apps
do not encrypt data both at rest and in transit.

https://www.microsoft.com/en-us/security/operations/security-intelligence-report

and also
●● 4% of SaaS storage apps
●● 3% of SaaS collaboration apps
support all HTTP headers session protection.

Rugged DevOps brings together the notions of DevOps and security. DevOps is about working faster.
Security is about emphasizing thoroughness, which is typically done at the end of the cycle, potentially
generating unplanned work right at the end of the pipeline. Rugged DevOps is a set of practices designed
to integrate DevOps and security, and to meet the goals of both more effectively.

The goal of a Rugged DevOps pipeline is to allow development teams to work fast without breaking their
project by introducing unwanted vulnerabilities.
Note: Rugged DevOps is also sometimes referred to as DevOpsSec. You might encounter both terms, but
they both refer to the same concept.

Security has typically been on a slower cycle and involved traditional security methodologies such as:
●● Access control
●● Environment hardening
●● Perimeter protection
In the context of Rugged DevOps, security includes all of these elements and more. With Rugged DevOps,
security is more about securing the pipeline. It is about determining where you can add security to
the elements that plug into your build and release pipeline. For example, it's about how and where you
can add security to your automation practices, production environments, and other pipeline elements
while attempting to retain the speed of DevOps.
Rugged DevOps includes bigger questions such as:
●● Is my pipeline consuming third-party components, and if so, are they secure?
●● Are there known vulnerabilities within any of the third-party software we use?

●● How quickly can I detect vulnerabilities (time to detect)?


●● How quickly can I remediate identified vulnerabilities (time to remediate)?
Security practices need to be as good and quick at detecting potential security anomalies as other parts
of the DevOps pipeline, including infrastructure automation and code development.

Rugged DevOps pipeline


As previously stated, the goal of a Rugged DevOps pipeline is to enable development teams to work fast
without introducing unwanted vulnerabilities to their project.

There are two important areas in the pipeline that are part of Rugged DevOps but not of other DevOps
pipelines:
●● Package management, and the approval process associated with it. In the workflow diagram there
are additional steps which account for how software packages are added to the pipeline, and the
approval process they need to go through before they are used. This occurs very early in the pipeline,
to try to identify any issues early in the cycle.
●● Source code scanning. There is an additional step for scanning the source code. This is to perform a
security scan and verify that certain security vulnerabilities are not present in our application source
code. This occurs after the app is built and before release and pre-release testing, again to try to
identify security vulnerabilities as early as possible.

We will address these areas in the remainder of this lesson, the problems they represent, and how
solutions can be achieved.

Software Composition Analysis (SCA)


It is important that you understand two areas from the Rugged DevOps pipeline: package management,
and OSS components.
Just as teams use version control as a single source of truth for source code, Rugged DevOps relies on a
package manager as the unique source of binary components. By using binary package management, a
development team can create a local cache of approved components, and make this a trusted feed for
the continuous integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow, which you can use to
organize and share access to your packages. It allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in
Git.
●● Protect your packages. Keep every public source package you use, including packages from npmjs
and nuget.org, safe in your feed where only you can delete it, and where it's backed by the
enterprise-grade Azure SLA.
●● Integrate seamless package handling into your CI/CD pipeline. Easily access all your artifacts in builds
and releases. Artifacts integrate natively with the Azure Pipelines CI/CD tool.
For more information on Azure Artifacts, visit the Azure Artifacts overview documentation.
The following table lists available package types, and the Azure DevOps and Azure DevOps Server
(previously known as Team Foundation Server (TFS)) versions, where they are available.

Feature      Azure DevOps Services   TFS
NuGet        Yes                     TFS 2017
npm          Yes                     TFS 2017 update 1 and newer
Maven        Yes                     TFS 2017 update 1 and newer
Gradle       Yes                     TFS 2018
Universal    Yes                     No
Python       Yes                     No
Maven, npm, and NuGet packages are supported from public and private sources with teams of any size.
Azure Artifacts comes with Azure DevOps, but the extension is also available from the Visual Studio
Marketplace.
https://docs.microsoft.com/en-us/azure/devops/artifacts/overview?view=vsts
https://marketplace.visualstudio.com/items?itemName=ms.feed

Note: After you publish a particular version of a package to a feed, that version number is permanently
reserved. You cannot upload a newer revision package with that same version number, or delete that
version and upload a new package with the same version number. The published version is immutable.

Developers today are more productive than ever as a result of the wide availability of reusable
open-source software (OSS) components. This practical approach to reuse includes runtimes, which are
available on Windows and Linux operating systems, such as Microsoft .NET Core and Node.js.
At the same time, OSS reuse comes with the risk of the reused dependencies having security
vulnerabilities. As a result, many users find security vulnerabilities in their applications due to the Node.js
package versions they consume.
OSS reuse has given rise to a new practice, sometimes called Software Composition Analysis (SCA), which is
depicted in the following image.

When consuming an OSS component, whether you're creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest correct version to avoid any old vulnerabilities or license misuse.
2. Validate that the OSS components are in fact the correct binaries for your version. In the release
pipeline, validate binaries to ensure they’re correct and to keep a traceable bill of materials.
3. In the event of a vulnerability, be notified immediately, and be able to correct and redeploy the
component automatically to prevent a security vulnerability or license misuse from reused software.

WhiteSource integration with Azure DevOps pipeline
The Azure DevOps Marketplace is an important site for addressing Rugged DevOps issues. From here you
can integrate specialist security products into your Azure DevOps pipeline. Having a full suite of extensions
that allow seamless integration into Azure DevOps pipelines is invaluable.

WhiteSource is one such example of an extension available on the Azure DevOps Marketplace. Using
WhiteSource, you can integrate extensions with your CI/CD pipeline to address Rugged DevOps
security-related issues. For a team consuming external packages, the WhiteSource extension specifically
addresses the questions of open source security, quality, and license compliance. Because most breaches
today target known vulnerabilities in known components, this is essential hygiene for consuming open
source products.

https://marketplace.visualstudio.com/
https://marketplace.visualstudio.com/items?itemName=whitesource.whitesource

Continuously detect all open-source components in your software
WhiteSource will automatically detect all open-source components—including their transitive
dependencies—every time you run a build. This means you can generate a comprehensive inventory report within
minutes based on the last build you ran. It also gives your security, DevOps, and legal teams full visibility
into your organization’s software development process.

Receive alerts on open-source security vulnerabilities
When a new security vulnerability is discovered, WhiteSource automatically generates an alert and
provides targeted remediation guidance. This can include links to patches, fixes, relevant source files,
even recommendations to change system configuration to prevent exploitation.

Automatically enforce open-source security and license compliance policies
According to a company’s policies, WhiteSource automatically approves, rejects, or triggers a manual
approval process every time a new open-source component is added to a build. Developers can set up
policies based on parameters such as security-vulnerability severity, license type, or library age. When a
developer attempts to add a problematic open source component, the service will send an alert and fail
the build.

For searching online repositories such as GitHub and Maven Central, WhiteSource also offers an innovative
browser extension. Even before choosing a new component, a developer can see its security vulnerabilities,
quality, and license issues, and whether it fits their company's policies.

Fortify integration with Azure DevOps pipeline


Micro Focus Fortify is another example of an extension you can leverage from the Azure DevOps Marketplace
and integrate with your CI/CD pipeline to address Rugged DevOps security-related issues.
Fortify allows you to add build tasks for continuous integration builds to help identify
vulnerabilities in your source code.
Micro Focus Fortify provides a comprehensive set of software security analyzers that search for violations
of security-specific coding rules and guidelines. Development groups and security professionals use it to
analyze application source code for security issues.

Micro Focus Fortify Static Code Analyzer (Fortify SCA) identifies root causes of software security
vulnerabilities. It then delivers accurate, risk-ranked results with line-of-code remediation guidance.

https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts

Fortify on Demand delivers application security as a service. Its build tasks automatically submit static and
dynamic scan requests to the SaaS platform. For static assessments, projects are uploaded to Fortify on
Demand; for dynamic assessments, it uses the application's preconfigured URL.

CheckMarx integration with Azure DevOps


Checkmarx is another example of an extension you can leverage from the Azure DevOps Marketplace and
integrate with your CI/CD pipeline to address Rugged DevOps security-related issues. Checkmarx is designed
for identifying, tracking, and fixing technical and logical security flaws. It is a powerful, unified security
solution for Static Source Code Analysis (SAST) and Open Source Analysis (OSA). You can download
Checkmarx from the Azure DevOps Marketplace.

Checkmarx functionality includes:


●● Best fix location. Checkmarx highlights the best place to fix your code to minimize the time required
to remediate the issue. A visual chart of the data flow graph indicates the ideal location in the code to
address multiple vulnerabilities within the data flow using a single line of code.
●● Quick and accurate scanning. Checkmarx helps reduce false positives, adapt the rule set to minimize
false positives, and understand the root cause for results.
●● Incremental scanning. Using Checkmarx, you can test just the parts of the code that have changed since
the last code check-in. This helps reduce scanning time by more than 80 percent. It also enables you to
incorporate the security gate within your continuous integration pipeline.
●● Seamless integration. Checkmarx works with all integrated development environments (IDEs), build
management servers, bug tracking tools, and source repositories.
●● Code portfolio protection. Checkmarx helps protect your entire code portfolio, both open source and
in-house source code. It analyzes open-source libraries, ensuring licenses are being adhered to, and
removing any open-source components that expose the application to known vulnerabilities. In
addition, Checkmarx Open Source helps provide complete code portfolio coverage under a single
unified solution with no extra installations or administration required.
●● Easy to initiate Open Source Analysis. Enhancing your code portfolio risk assessment coverage is just a
few mouse clicks away. With Checkmarx's Open Source analysis, you don't need additional installations
or multiple management interfaces; you simply turn it on, and within minutes a detailed report is
generated with clear results and detailed mitigation instructions. Analysis results are designed with
the developer in mind. No time is wasted trying to understand the required action items to mitigate
detected security or compliance risks.

https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast

How to Integrate SCA checks into pipelines


Security scanning used to be viewed as an activity that was passed on to another team, with little
involvement from others, and completed perhaps once per release. This practice creates a dangerous
pattern in which security specialists find large batches of issues at the exact time that developers are
under the most pressure to release. This results in products being released with security vulnerabilities
that are addressed only after their release. Rugged DevOps helps make all quality activity—including
security—continuous and automated, by integrating scanning into a team’s workflow at multiple points
along the development path.

Pull requests (PRs) are the way DevOps teams submit changes. Prior to a PR, a developer needs the ability
to see the effect of code changes to avoid introducing new issues. In a DevOps process, each PR is
typically small, and merges are continual, enabling the master branch of code to stay fresh. Ideally, a
developer can check for security issues prior to a PR.
Azure Marketplace extensions that facilitate integrating scans during PRs include:
●● WhiteSource. Facilitates validating dependencies with its binary fingerprinting.
●● Checkmarx. Provides an incremental scan of changes.
●● Veracode. Has the concept of a developer sandbox.
●● Black Duck. An auditing tool for open source code to help identify, fix, and manage
compliance.
These extensions allow a developer to experiment with changes prior to submitting them.

Build and release definition code scan analysis integration


Developers need to optimize Continuous Integration (CI) for speed to give their development team
immediate feedback about any build breaks. If scanning can be made fast enough, it can and should be
integrated into the CI build definition. A failed scan prevents a broken build, which enables developers to
fix the security issues immediately, restoring the build to green.
At the same time, continuous delivery (CD) needs to be thorough. In Azure DevOps, CD is typically
managed through release definitions (which progress the build output across environments), or via
additional build definitions. Build definitions can be scheduled (perhaps daily), or triggered with each
commit.
In either case, the build definition can perform a longer static analysis scan as outlined in the following
image. You can scan the full code project and review any errors or warnings offline without blocking the
CI flow.
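As a sketch of this idea, a CI build definition might run a quick scan step directly after the build. The task name `SecurityScan@1` and its inputs are placeholders standing in for whichever marketplace scanning task (WhiteSource, Checkmarx, Veracode, and so on) your team actually uses:

```yaml
# azure-pipelines.yml -- hypothetical sketch; the scan task and its inputs
# are placeholders, not a real marketplace task.
trigger:
  - master                   # the scan runs as part of CI, on every commit

steps:
  - script: dotnet build MySolution.sln
    displayName: Build
  - task: SecurityScan@1     # placeholder for your chosen scanning task
    displayName: Static analysis scan
    inputs:
      mode: incremental      # keep the scan fast so CI feedback stays immediate
```

A separate, scheduled build definition could run the same pipeline with a full (rather than incremental) scan, keeping the thorough analysis off the critical CI path.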

https://www.whitesourcesoftware.com/
https://www.checkmarx.com/
https://www.veracode.com/
https://www.blackducksoftware.com/

Veracode integration with Azure DevOps


Veracode is another example of an extension that you can leverage from the Azure DevOps Marketplace and
integrate with your CI/CD pipeline to address Rugged DevOps security-related issues.
The Veracode Application Security Platform is a SaaS solution that enables developers to automatically scan an
application for security vulnerabilities. Veracode provides static application security testing (SAST),
dynamic application security testing (DAST), and SCA, allowing development teams to assess both
first-party code and third-party components for security risks.

https://marketplace.visualstudio.com/items?itemName=Veracode.veracode-vsts-build-extension

●● Integrate application security into the development tools you already use. From within Azure DevOps
and Microsoft Team Foundation Server (TFS) you can automatically scan code using the Veracode
Application Security Platform to find security vulnerabilities, import any security findings that violate
your security policy as work items, and even optionally stop the build if serious security issues are
discovered.
●● Don't stop for false alarms: Because Veracode gives you accurate results and prioritizes them based
on severity, you won't need to waste resources responding to hundreds of false positives. Veracode
has assessed over 2 trillion lines of code in 15 languages and over 70 frameworks, and the process
improves with every assessment as a result of the rapid update cycles and continuous improvement
processes. And, if something does get through, you can mitigate it using the easy Veracode workflow.
●● Align your application security practices with your development practices: Do you have a large or
distributed development team? Do you have too many revision control branches? You can integrate
your Azure DevOps workflows with the Veracode Developer Sandbox, which supports multiple
development branches, feature teams, and other parallel development practices.
●● Don't just find vulnerabilities, fix them: Veracode gives you remediation guidance with each finding
and the data path that an attacker would use to reach the application's weak point. Veracode also
highlights the most common sources of vulnerabilities to help you prioritize remediation. In addition,
when vulnerability reports don't provide enough clarity, you can set up one-on-one developer
consultations with Veracode experts who have backgrounds in both security and software development.
Security issues that are found by Veracode, which could prevent you from releasing, show up
automatically in your teams' list of work items, and are automatically updated and closed after you
scan your fixed code.
●● Proven onboarding process allows for scanning on day one: Want to get started quickly? The
cloud-based Veracode Application Security Platform is designed to get you going quickly and be easy
to use, so that you can get started in minutes. Veracode's services and support team can make sure
that you are on track to build application security into your process.



DevOps and pipeline security


In addition to protecting your code, it’s essential to protect credentials and secrets. In particular, phishing
is becoming ever more sophisticated. There are several operational practices that a team ought to apply
to protect itself:
●● Authentication and Authorization. Use multi-factor authentication, even across internal domains, and
just-in-time administration tools such as PowerShell Just Enough Administration (JEA), to protect
against escalation of privileges. Using different passwords for different user accounts will limit the
damage if a set of credentials is stolen.
●● Use the CI/CD release pipeline. If the release pipeline and its cadence are damaged, use this pipeline
to rebuild infrastructure. If you manage Infrastructure as Code with Azure Resource Manager, or use
the Azure platform as a service (PaaS) or a similar service, then your pipeline will automatically create
new instances and then destroy them, giving attackers no place to hide code inside your
infrastructure. Azure DevOps will encrypt the secrets in your pipeline, and as a best practice, rotate
the passwords just as you would other credentials.
●● Manage Permissions. You can manage permissions to secure the pipeline with role-based access
control (RBAC) just as you would for your source code. You want to control who can edit the build and
release definitions you use for production.
●● Dynamic Scanning. This is the process of testing the running application with known attack patterns.
You could implement penetration testing as part of your release, and keep up to date on security
projects such as the Open Web Application Security Project (OWASP), then adopt these projects into
your processes.
●● Monitoring Production. This is a key DevOps practice. The specialized services for detecting anomalies
related to intrusion are known as Security Information and Event Management (SIEM) systems. Azure
Security Center focuses on the security incidents related to the Azure cloud.
Note: In all cases, use Azure Resource Manager Templates, or other code-based configurations, and
Infrastructure as Code best practices, such as only making changes in templates, thereby making them
traceable and repeatable. Also, use provisioning and configuration technologies such as Desired State
Configuration, Azure Automation, and other third-party tools and products that integrate seamlessly with
Azure.

Secure DevOps kit for Azure (AzSK)


The Secure DevOps Kit for Azure (AzSK) is a collection of scripts, tools, extensions, automations, and
other items that cater to the end-to-end Azure subscription and resource security needs of DevOps
teams, using extensive automation and integrating security into native DevOps workflows.
The Secure DevOps Kit for Azure helps secure DevOps by:
●● Secure the subscription. A secure cloud subscription provides a core foundation upon
which to conduct development and deployment activities. An engineering team can deploy and
configure security in the subscription by using alerts, ARM policies, RBAC, Security Center policies,
JEA, and Resource Locks. Likewise, it can verify that all settings are in conformance with a secure
baseline.

http://aka.ms/jea
https://www.owasp.org
https://azure.microsoft.com/en-us/services/security-center/
https://github.com/azsk/DevOpsKit-docs

●● Enable secure development. During the coding and early development stages, developers need to
write secure code and then test the secure configuration of their cloud applications. Similar to build
verification tests (BVTs), AzSK introduces the concept of security verification tests (SVTs), which can
check for security of various resource types in Azure.
●● Integrate security into CI/CD. Test automation is a core tenet of DevOps. Secure DevOps provides
the ability to run SVTs as part of the Azure DevOps CI/CD pipeline. You can use these SVTs to ensure
that the target subscription used to deploy a cloud application and the Azure resources the applica-
tion is built upon are all set up in a secure manner.
●● Continuous assurance. In the constantly changing DevOps environment, it's important to
move away from the mindset of security being a milestone. Security needs to be viewed as a continu-
ously varying state of a system. This is possible through Secure DevOps capabilities that enable
continuous assurance using a combination of tools such as automation runbooks and schedules.
●● Alerting & monitoring. Security status visibility is important for both individual application teams
and central enterprise teams. Secure DevOps provides solutions that cater to the needs of both.
Moreover, the solution spans across all stages of DevOps, in effect bridging the security gap between
the dev team and the ops team through the single, integrated views it generates.
●● Cloud risk governance. Underlying all activities in the Secure DevOps Kit is a telemetry framework
that generates events such as capturing usage, adoption, and evaluation results. This enables you to
make measured improvements to security, targeting areas of high risk and maximum usage.
You can leverage the tools, scripts, templates, and best practice documentation available as part of AzSK.
Azure security and compliance tools and services

Lesson Overview
This lesson includes the following topics:
●● Azure Security Center
●● Azure Security Center usage scenarios
●● Azure Policy
●● Policies
●● Initiatives
●● Azure Key Vault
●● RBAC
●● Locks
●● Subscription governance
●● Azure Blueprints
●● Azure Advanced Threat Protection

Azure Security Center


Azure Security Center is a monitoring service that provides threat protection across all of your services, both in Azure and on-premises. Security Center can:
●● Provide security recommendations based on your configurations, resources, and networks.
●● Monitor security settings across on-premises and cloud workloads, and automatically apply required
security to new services as they come online.
●● Continuously monitor all your services, and perform automatic security assessments to identify
potential vulnerabilities before they can be exploited.
●● Use Azure Machine Learning to detect and block malicious software from being installed on your VMs
and services. You can also define a list of allowed applications to ensure that only the apps you
validate are allowed to execute.
●● Analyze and identify potential inbound attacks, and help to investigate threats and any post-breach
activity that might have occurred.
●● Provide just-in-time (JIT) access control for ports, thereby reducing your attack surface by ensuring
the network only allows traffic that you require.

Azure Security Center is part of the Center for Internet Security (CIS) recommendations.

Azure Security Center supports both Windows and Linux operating systems. It can also provide security
to features in both IaaS and PaaS scenarios.
Azure Security Center is available in two tiers:

●● Free. Available as part of your Azure subscription, this tier is limited to assessments and recommendations for Azure resources only.
●● Standard. This tier provides a full suite of security-related services including continuous monitoring,
threat detection, JIT access control for ports, and more.
To access the full suite of Azure Security Center services you will need to upgrade to a Standard tier subscription. You can access the 60-day free trial from within the Azure Security Center dashboard in the Azure portal.

After the 60-day trial period is over, Azure Security Center costs $15 per node per month. To upgrade a subscription from the Free tier to the Standard tier, you must be assigned the role of Subscription Owner, Subscription Contributor, or Security Admin.

Note: You can read more about Azure Security Center at the Azure Security Center page.

Azure Security Center usage scenarios



You can integrate Security Center into your workflows and use it in many ways. Here are two examples.
1. Use Security Center for an incident response.

https://www.cisecurity.org/cis-benchmarks/
https://azure.microsoft.com/en-us/services/security-center/

Many organizations learn how to respond to security incidents only after suffering an attack. To reduce costs and damage, it's important to have an incident response plan in place before an attack occurs.
You can use Security Center during the detect, assess, and diagnose stages of an incident response. Here are examples of how Security Center can be useful during the three initial incident response stages:
●● Detect. Review the first indication of an event investigation.
Example: Use the Security Center dashboard to review the initial verification that a high-priority
security alert was raised.
●● Assess. Perform the initial assessment to obtain more information about the suspicious activity.
Example: Obtain more information about the security alert.
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies.
Example: Follow the remediation steps described by Security Center in that particular security alert.
2. Use Security Center recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy, and then implementing the recommendations provided by Azure Security Center.
A security policy defines the set of controls that are recommended for resources within the specified subscription or resource group. In Security Center, you define policies according to your company's security requirements.
Security Center analyzes the security state of your Azure resources. When Security Center identifies potential security vulnerabilities, it creates recommendations based on the controls set in the security policy. The recommendations guide you through the process of configuring the needed security controls.
For example, if you have workloads that do not require the Azure SQL Database Transparent Data Encryption (TDE) policy, turn off the policy at the subscription level and enable it only in the resource groups where SQL Database TDE is required.
Note: You can read more about Azure Security Center at the Azure Security Center page. More implementation and scenario details are also available in the Azure Security Center planning and operations guide.

Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, which ensures they stay compliant with your corporate standards and service-level agreements (SLAs).

https://azure.microsoft.com/en-us/services/security-center/
https://docs.microsoft.com/en-us/azure/security-center/security-center-planning-and-operations-guide

Azure Policy provides enforcement by using policies and initiatives. It evaluates your resources and scans for those not compliant with the policies you have created. For example, you can have a policy that allows only a certain stock keeping unit (SKU) size of VMs in your environment. After you implement this policy, it will evaluate resources whenever they are created or updated. It will also evaluate your existing resources and configurations, and automatically remediate those that are deemed non-compliant, thus ensuring the integrity of the state of the resources.
Azure Policy comes with a number of built-in policy and initiative definitions that you can use. The
definitions fall under categories such as Storage, Networking, Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps by applying any continuous integration and delivery
pipeline policies that apply to the pre-deployment and post-deployment of your applications.

An example of an Azure policy that you can integrate with your DevOps pipeline is the Check Gate task.
This provides security and compliance assessment with Azure policies on resources that belong to the
defined resource group and Azure subscription. This is available as a release pipeline Deploy task.
You can read more about these subjects at:
●● The Azure Policy Check Gate task
●● Azure Policy

Policies
The journey of creating and implementing a policy in Azure Policy begins with creating a policy definition.
Every policy definition has conditions under which it is enforced, and an accompanying effect that takes
place if the conditions are met.
The process of applying a policy to your resources consists of the following steps:
1. Create a policy definition.
2. Assign a definition to a scope of resources.
3. View policy evaluation results.

Policy definition
A policy definition expresses what to evaluate and what action to take. For example, you could prevent
VMs from deploying if they are exposed to a public IP address. You also could prevent a particular hard
disk from being used when deploying VMs to control costs. Policies are defined in JSON. Here's an
example script of a policy that limits where resources are deployed:

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate?view=vsts
https://azure.microsoft.com/en-us/services/azure-policy/

{
    "properties": {
        "mode": "all",
        "parameters": {
            "allowedLocations": {
                "type": "array",
                "metadata": {
                    "description": "The list of locations that can be specified when deploying resources",
                    "strongType": "location",
                    "displayName": "Allowed locations"
                }
            }
        },
        "displayName": "Allowed locations",
        "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
        "policyRule": {
            "if": {
                "not": {
                    "field": "location",
                    "in": "[parameters('allowedLocations')]"
                }
            },
            "then": {
                "effect": "deny"
            }
        }
    }
}
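As a rough illustration of how this rule behaves, the sketch below applies the same not-in-allowed-locations / deny logic to a resource object locally. This is only a Python simulation of the rule's logic, not the Azure Policy engine; the resource shape and the example parameter values are invented for the example.

```python
# Local simulation of the "allowed locations" policy rule above.
# This is NOT the Azure Policy engine -- just an illustration of the
# if / not-in / deny logic expressed in the JSON definition.

ALLOWED_LOCATIONS = ["northeurope", "westeurope"]  # example parameter value


def evaluate_policy(resource: dict) -> str:
    """Return 'deny' when the resource's location is not in the allowed
    list (mirroring the 'if not in -> deny' rule), else 'allow'."""
    if resource.get("location") not in ALLOWED_LOCATIONS:
        return "deny"
    return "allow"


print(evaluate_policy({"name": "vm1", "location": "westeurope"}))  # allow
print(evaluate_policy({"name": "vm2", "location": "eastus"}))      # deny
```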

The following list contains example policy definitions:
●● Allowed storage account SKUs. This policy definition has a set of conditions (or rules) that determine whether a storage account that is being deployed is within a set of SKU sizes. Its effect is to deny all storage accounts that do not adhere to the set of defined SKU sizes.
●● Allowed resource types. This policy definition has a set of conditions to specify the resource types that employees can deploy. Its effect is to deny all resources that are not part of this defined list.
●● Allowed locations. This policy enables you to restrict the locations that your organization can specify when deploying resources. Its effect is used to enforce your geographic compliance requirements.
●● Allowed virtual machine SKUs. This policy enables you to specify a set of VM SKUs that your organization can deploy.

To implement these policy definitions, whether custom or built in, you will need to assign them. A policy
assignment is a policy definition that has been assigned to take place within a specific scope. This scope
could range from a management group to a resource group. Policy assignments are inherited by all child
resources. This means that if a policy is applied to a resource group, it's applied to all the resources within
that resource group. However, you can exclude a subscope from the policy assignment.
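The scoping and exclusion behavior described above can be sketched as follows. This is a local illustration only; the scope strings loosely mimic Azure resource IDs, and the prefix-matching logic is a simplification of how scopes actually nest.

```python
# Sketch of policy-assignment scoping: a policy assigned at a
# resource-group scope applies to all child resources, unless a
# resource falls under an excluded subscope.

def policy_applies(assignment_scope: str, excluded_scopes: list, resource_id: str) -> bool:
    """True when the resource sits under the assignment scope and is
    not under any excluded subscope (simplified prefix matching)."""
    if any(resource_id.startswith(ex) for ex in excluded_scopes):
        return False
    return resource_id.startswith(assignment_scope)


rg = "/subscriptions/sub1/resourceGroups/rg1"
excluded = ["/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Web"]

# A VM in rg1 inherits the assignment; a web app under the excluded
# subscope does not.
print(policy_applies(rg, excluded, rg + "/providers/Microsoft.Compute/virtualMachines/vm1"))  # True
print(policy_applies(rg, excluded, rg + "/providers/Microsoft.Web/sites/app1"))               # False
```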

You can assign policies via:


●● Azure Portal
●● Azure CLI
●● PowerShell

Resources that are non-compliant with a policy can be put into a compliant state through remediation. Remediation is accomplished by instructing Azure Policy to run the effect of the assigned policy on your existing resources.
Note: You can read more about Azure Policy on the Azure Policy webpage.

Initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for a larger goal. Even if you have a single policy, we recommend using
initiatives if you anticipate increasing the number of policies over time.
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope. This scope
could also range from a management group to a resource group. You assign initiatives in the same way
you assign policies.

Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping a set
of policies as one single item. For example, you could create an initiative named Enable Monitoring in
Azure Security Center, with a goal to monitor all the available security recommendations in your Azure
Security Center.
Under this initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL databases in Security Center. This policy definition is for monitoring unencrypted SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition is for monitoring servers that do not satisfy the configured baseline.
●● Monitor missing Endpoint Protection in Security Center. This policy definition is for monitoring servers without an installed endpoint protection agent.
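The grouping idea can be sketched as follows: an initiative is compliant only when all of its member policies are. The check functions and resource fields below are invented for illustration and only loosely mirror the three monitoring policies above.

```python
# Sketch: an initiative groups several policy definitions; a resource
# is compliant with the initiative only if it passes every member
# policy. Field names here are invented for the example.

initiative = {
    "monitor-unencrypted-sql": lambda r: r.get("tde_enabled", False),
    "monitor-os-vulnerabilities": lambda r: r.get("baseline_ok", False),
    "monitor-missing-endpoint-protection": lambda r: r.get("endpoint_agent", False),
}


def initiative_compliance(resource: dict) -> dict:
    """Return per-policy results plus an overall verdict."""
    results = {name: check(resource) for name, check in initiative.items()}
    results["compliant"] = all(results.values())
    return results


server = {"tde_enabled": True, "baseline_ok": True, "endpoint_agent": False}
print(initiative_compliance(server))  # overall verdict is False
```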

Note: You can read more about policy definition and structure at Azure Policy definition structure.

https://azure.microsoft.com/en-us/services/azure-policy/
https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure

Azure Key Vault


Azure Key Vault is a centralized cloud service for storing your applications' secrets. Key Vault helps you
control your applications' secrets by keeping them in a single, central location and by providing secure
access, permissions control, and access logging capabilities.

Some of Key Vault's key capabilities and usage scenarios include:


●● Secrets management. You can use Key Vault to securely store and tightly control access to tokens,
passwords, certificates, API keys, and other secrets.
●● Key management. You also can use Key Vault as a key management solution. Key Vault makes it easier
to create and control the encryption keys used to encrypt your data.
●● Certificate management. Key Vault lets you provision, manage, and deploy your public and private
Secure Sockets Layer (SSL)/TLS certificates for your Azure subscription and internally connected
resources more easily.
●● Store secrets backed by hardware security modules (HSMs). The secrets and keys can be protected
either by software, or by Federal Information Processing Standard (FIPS) 140-2 Level 2 validated
HSMs.

Key Vault benefits


The benefits of using Key Vault include:
●● Centralized application secrets. Centralizing storage for application secrets allows you to control their
distribution, and reduces the chances that secrets might be accidentally leaked.
●● Securely stored secrets and keys. Azure uses industry-standard algorithms, key lengths, and HSMs,
and access requires proper authentication and authorization.
●● Monitor access and use. Using Key Vault, you can monitor and control access to company secrets.
●● Simplified administration of application secrets. Key Vault makes it easier to enroll and renew certificates from public certificate authorities (CAs). You can also scale up and replicate content within regions, and use standard certificate management tools.
●● Integrate with other Azure services. You can integrate Key Vault with storage accounts, container
registries, event hubs, and many more Azure services.
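As a conceptual sketch of the central-store-plus-access-logging idea (not the real service, which you would reach through an SDK such as azure-keyvault-secrets), consider this toy in-memory vault. Every name here is invented for the illustration.

```python
# Toy illustration of the Key Vault idea: secrets live in one central
# store, every read is authorized and logged. This models the concept
# only -- the real service adds HSM backing, versioning, RBAC, etc.

class MiniVault:
    def __init__(self, allowed_readers):
        self._secrets = {}
        self._allowed = set(allowed_readers)
        self.access_log = []  # (caller, secret name, success) tuples

    def set_secret(self, name, value):
        self._secrets[name] = value

    def get_secret(self, name, caller):
        ok = caller in self._allowed
        self.access_log.append((caller, name, ok))  # every access is logged
        if not ok:
            raise PermissionError(f"{caller} may not read secrets")
        return self._secrets[name]


vault = MiniVault(allowed_readers=["app-identity"])
vault.set_secret("db-password", "s3cret")
print(vault.get_secret("db-password", "app-identity"))  # s3cret
```

The point of the sketch is the design choice: because all reads go through one place, distribution can be controlled and every access leaves an audit trail.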
Note: You can read more about Key Vault on the Azure Key Vault webpage.

https://azure.microsoft.com/en-us/services/key-vault/

Role-Based Access Control (RBAC)


RBAC provides fine-grained access management for Azure resources, which enables you to grant users
only the rights they need to perform their jobs. RBAC is provided at no additional cost to all Azure
subscribers.

Examples of when you might use RBAC include when you want to:
●● Allow one user to manage VMs in a subscription, and another user to manage virtual networks.
●● Allow a database administrator (DBA) group to manage SQL Server databases in a subscription.
●● Allow a user to manage all resources in a resource group, such as VMs, websites, and subnets.

●● Allow an application to access all resources in a resource group.


To view access permissions for a deployed virtual machine, open the Access control (IAM) blade in the Azure portal. On this blade, you can view who has access to an area and their role, as well as grant or remove access.

The following illustration is an example of the Access control (IAM) blade for a resource group, with the Roles tab displaying some of the available built-in roles.

A full list of available built-in roles is available on the built-in roles page.
RBAC uses an allow model. This means that when you are assigned a role, RBAC allows you to perform certain actions, such as read, write, or delete. Therefore, if one role assignment grants you read permissions to a resource group, and a different role assignment grants you write permissions to the same resource group, you will have both read and write permissions on that resource group.
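The additive allow model can be sketched as a union of the actions granted by each assignment. The role names and action strings below are simplified stand-ins, not real Azure role definitions.

```python
# Sketch of the RBAC allow model: effective permissions on a scope are
# the UNION of the actions granted by all role assignments; nothing is
# ever subtracted by another assignment.

roles = {
    "Reader": {"read"},
    "Contributor": {"read", "write", "delete"},
    "CustomWriter": {"read", "write"},
}


def effective_permissions(assigned_roles):
    perms = set()
    for role in assigned_roles:
        perms |= roles[role]  # allows accumulate across assignments
    return perms


print(sorted(effective_permissions(["Reader", "CustomWriter"])))  # ['read', 'write']
```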

When using RBAC, segregate duties within your team and grant users only the amount of access that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, allow only certain actions at a particular scope; that is, grant users the lowest privilege level that they need to do their work.
Note: For more information about RBAC, visit the RBAC overview page.

Locks
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage
these locks from within the Azure portal. To view, add, or delete locks, go to the section of any
resource's settings blade.

https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
https://docs.microsoft.com/en-us/azure/role-based-access-control/overview

You may need to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly:
●● CanNotDelete means authorized users can still read and modify a resource, but they can't delete the resource.
●● ReadOnly means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.
In the Azure portal, these locks are called Delete and Read-only, respectively.
Note: You can read more about locks at the Lock resources page.
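The two lock levels can be modeled as a simple rule: ReadOnly permits reads only, while CanNotDelete permits everything except delete. A minimal sketch, with operation names simplified for the example:

```python
# Sketch of lock semantics described above: CanNotDelete blocks only
# delete; ReadOnly blocks both delete and update. This is a local
# model of the rules, not a call to Azure.

def operation_allowed(lock_level, operation):
    """lock_level: None, 'CanNotDelete', or 'ReadOnly'."""
    if lock_level == "ReadOnly":
        return operation == "read"
    if lock_level == "CanNotDelete":
        return operation != "delete"
    return True  # no lock: everything the caller's RBAC role allows


print(operation_allowed("CanNotDelete", "update"))  # True
print(operation_allowed("ReadOnly", "update"))      # False
```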

Subscription governance
When creating and managing subscriptions: Billing, Access Control and Subscription limits, there are three
main aspects to consider:
●● Billing. Billing reports can be generated by subscriptions. If, for example, you have multiple internal
departments and need to perform a chargeback, you can create subscriptions by department or
project.
●● Access Control. A subscription is a deployment boundary for Azure resources. Every subscription is
associated with an Azure Active Directory (Azure AD) tenant that provides administrators with the
ability to set up RBAC. When designing a subscription model, be sure to consider the deployment
boundary factor. Some customers have separate subscriptions for development and production, each
one (from a resource perspective) being completely isolated from the other, and managed using
RBAC.
●● Subscription Limits. Subscriptions are bound to some hard limitations. For example, the maximum number of Azure ExpressRoute circuits per subscription is 10. You should take those limits into consideration during the design phase. If there is a need to go over those limits in particular scenarios, then you might need additional subscriptions. If you hit a hard limit, there is no flexibility.

Also available to assist with managing subscriptions are management groups. Management groups
manage access, policies, and compliance across multiple Azure subscriptions. They allow you to order
your Azure resources hierarchically into collections, which provides a further level of classification above
the level of subscriptions.
In the graphic below, we can see how Azure access is divided across different business functions, such as HR, marketing, and IT, and also per region. We could subdivide this further and include subscriptions for Dev and QA, as well as for specific teams working on our pipeline, such as the security team. We could then track our costs and resource usage at a much more granular level, as well as adding additional security layers and segmenting our workloads. Tightly restricting access to the production subscriptions would further enhance our security segmentation.

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-lock-resources

You can manage your Azure subscriptions more effectively by using Azure Policy and Azure RBAC. These provide distinct governance conditions that you can apply to each management group. The resources and subscriptions you assign to a management group automatically inherit the conditions that you apply to that management group.
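The inheritance described above can be sketched as a walk up the management-group hierarchy, collecting the policies applied at each ancestor. The group names and policy names below are invented for the example.

```python
# Sketch of management-group inheritance: a subscription inherits the
# governance conditions applied to every ancestor management group.

parents = {
    "prod-sub": "it",          # subscription lives under the IT group
    "it": "contoso-root",      # which lives under the root group
    "contoso-root": None,
}
policies_at = {
    "contoso-root": ["require-tags"],
    "it": ["allowed-locations"],
}


def inherited_policies(scope):
    """Collect policies from the scope and all its ancestors."""
    acc = []
    while scope is not None:
        acc = policies_at.get(scope, []) + acc  # ancestors listed first
        scope = parents.get(scope)
    return acc


print(inherited_policies("prod-sub"))  # ['require-tags', 'allowed-locations']
```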
Note: For more information about management groups, read the management groups documentation page.

Azure Blueprints
Azure Blueprints enables cloud architects to define a repeatable set of Azure resources that implement and adhere to an organization's standards, patterns, and requirements. Azure Blueprints helps development teams rapidly build and deploy new environments with the knowledge that they're building within organizational compliance, and with a set of built-in components that speed up development and delivery.

https://docs.microsoft.com/en-us/azure/governance/management-groups/

Azure Blueprints is a declarative way to orchestrate deployment for various resource templates and other
artifacts, such as:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
The process of implementing Azure Blueprints consists of the following high-level steps:
1. Create an Azure Blueprints blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved. This connection supports improved deployment
tracking and auditing.
The blueprints in Azure Blueprints are different from Azure Resource Manager templates. When Azure
Resource Manager templates deploy resources, they have no active relationship with the deployed
resources (they exist in a local environment or in source control). By contrast, with Azure Blueprints, each
deployment is tied to an Azure Blueprints package. This means that the relationship with resources will be
maintained, even after deployment. In this way, maintaining relationships improves auditing and tracking
capabilities.
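The preserved definition-to-assignment relationship can be sketched as follows: each assignment record keeps a link back to the blueprint it came from, which is what enables the tracking and auditing described above. The names and structures here are illustrative, not the Blueprints API.

```python
# Sketch of the Blueprints definition/assignment relationship: unlike a
# plain ARM template deployment, each assignment retains a reference to
# the blueprint (package) it was deployed from, so it stays traceable.

blueprint = {
    "name": "corp-baseline",
    "artifacts": ["role-assignments", "policy-assignments",
                  "arm-template", "resource-groups"],
}

assignments = []  # the tracked "what is deployed" records


def assign_blueprint(bp, scope):
    record = {"blueprint": bp["name"], "scope": scope,
              "artifacts": list(bp["artifacts"])}
    assignments.append(record)  # link preserved for auditing/tracking
    return record


assign_blueprint(blueprint, "/subscriptions/dev-sub")
assign_blueprint(blueprint, "/subscriptions/prod-sub")

# Audit question: where has corp-baseline been deployed?
print([a["scope"] for a in assignments if a["blueprint"] == "corp-baseline"])
```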

Adhering to security or compliance requirements, whether government, industry, or organization requirements, can be difficult and time consuming. To help you with auditing, traceability, and compliance with your deployments, use Azure Blueprints artifacts and tools to expedite your path to compliance and certification.
Azure Blueprints is also useful in Azure DevOps scenarios where blueprints are associated with specific
build artifacts and release pipelines, and can be tracked more rigorously.
Note: At the time of writing, Azure Blueprints is in preview and has not yet been released publicly. You can learn more about Azure Blueprints on the Azure Blueprints page.

Azure Advanced Threat Protection (ATP)


Azure Advanced Threat Protection (Azure ATP) is a cloud-based security solution that identifies, detects,
and helps you investigate advanced threats, compromised identities, and malicious insider actions
directed at your organization. Azure ATP is capable of detecting known malicious attacks and techniques,
security issues, and risks against your network.

https://azure.microsoft.com/en-us/services/blueprints/

Azure ATP consists of the following components:


●● Azure ATP portal. Azure ATP has its own portal, through which you can monitor and respond to suspicious activity. The portal allows you to create your Azure ATP instance, and view the data received from Azure ATP sensors. You can also use the portal to monitor, manage, and investigate threats in your network environment. You sign in to the Azure ATP portal at https://portal.atp.azure.com. Note that you must sign in with a user account that is assigned to an Azure AD security group that has access to the Azure ATP portal.
●● Azure ATP sensor. Azure ATP sensors are installed directly on your domain controllers. The sensor
monitors domain controller traffic without requiring a dedicated server or configured port mirroring.
●● Azure ATP cloud service. Azure ATP cloud service runs on the Azure infrastructure, and is currently deployed in the United States, Europe, and Asia. The cloud service is connected to the Microsoft Intelligent Security Graph.

https://portal.atp.azure.com

Azure ATP is available as part of the Enterprise Mobility + Security E5 suite, and as a standalone license. You can acquire a license directly from the Enterprise Mobility + Security pricing page, or through the Cloud Solution Provider (CSP) licensing model. It is not available to purchase via the Azure portal.
Note: For more information about Azure ATP, review the Azure Advanced Threat Protection page.

https://www.microsoft.com/en-ie/cloud-platform/enterprise-mobility-security-pricing
https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection/
Lab  307

Labs Overview

To complete these practical exercises you will require the following:


●● a Microsoft Account
●● an Azure subscription

1. If you already have a Microsoft account that has not already been used to sign up for a free Azure trial subscription, you're ready to get started. If not, don't worry, just create a new Microsoft account.
2. After you've created a Microsoft account, create your free Azure account. You'll need to sign in with your Microsoft account if you're not already signed in. Then you'll need to:
●● Enter your cellphone number and have Microsoft send you a text message to verify your identity.
●● Enter the code you have been sent to verify it.
●● Provide valid payment details. This is required for verification purposes only; your credit card won't be charged for any services you use during the trial period, and the account is automatically deactivated at the end of the trial period unless you explicitly decide to keep it active. For more information, see the FAQ on the Azure sign-up page.

Like many other cloud infrastructure platforms today, Azure is continuously developing updates to its services and components. If you've had your own subscriptions for any length of time, you are already aware that changes to services happen much more rapidly than with more traditional application deployment models.
Every effort will be made to update course content when there are major changes to product functionality. However, there will be occasions where course content does not exactly match the latest version of the product. In most cases, you should still be able to understand the tasks and complete the course. The general guidance from the Azure documentation teams is to check the documentation frequently to see what upcoming notifications have been posted or where documentation has been updated to reflect the latest changes.
We encourage you to consult the Azure updates page as a starting point for the latest information about updates. From there, you can avail yourself of blogs and other resources that are provided to help you stay current in a cloud-enabled world.

https://aka.ms/edx-devops200.4x-msa
https://aka.ms/edx-devops200.4x-az2
https://azure.microsoft.com/en-us/free/free-account-faq/
https://azure.microsoft.com/en-us/updates/

AZ-400T05-M05-Lab Tasks

Module 5 - Compliance and Security


This course includes labs in which you can try out the techniques demonstrated in the course.

Steps for the labs are available at the sites below, under these sections:
●●
●●
Click on the link below for the lab tasks for this module, and follow the steps outlined there for each lab task.

●●

http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaC-SecurityandComplianceinpipeline.html
Module Review Questions  309

Compliance and Security review questions

Rugged DevOps combines which two elements?


(Choose two)
†† DevOps
†† Cost management
†† Microservice Architecture
†† Security
†† Hackathons

Which term broadly defines what security means in Rugged DevOps?


†† Access control
†† Application server hardening
†† perimeter protection
†† Securing the pipeline

What component in Azure DevOps can you use to store, organize, and share access to packages, and integrate those packages with your continuous integration and continuous delivery pipeline?
†† Test Plans
†† Azure Artifacts
†† Boards
†† Pipelines

Which of the following package types are available to use with Azure Artifacts?
(choose three)
†† NuGet
†† npm
†† PowerShell
†† Maven

Which description from the list below best describes the term Software Composition Analysis?
†† Assessment of production hosting infrastructure just before deployment
†† Analyze build software to identify load capacity
†† Analyzing open source software (OSS) to identify potential security vulnerabilities and provide validation that the software meets defined criteria for use in your pipeline
†† Analyzing open source software after it has been deployed to production to identify security vulnerabilities

From where can extensions be sourced to integrate into Azure DevOps CI/CD pipelines and help provide software composition analysis?
†† Azure DevOps Marketplace
†† www.microsoft.com
†† Azure Security Center

†† TFVC git repos

Which products, from the below list, are available as extensions in Azure DevOps Marketplace, and can
provide either OSS or source code scanning as part of an Azure DevOps pipeline?

(choose all that apply)


†† Whitesource
†† CheckMarx
†† Micro Focus Fortify
†† Veracode

Which Azure service from the below list is a monitoring service that can provide threat protection and
security recommendations across all of your services both in Azure and on-premises?

†† Azure Policy
†† Azure Security Center
†† Azure Key vault

†† Role-based access control



Which Azure service should you use from the below list to monitor all unencrypted SQL databases in your
organization?
†† Azure Policy
†† Azure Security Center
†† Azure Key Vault
†† Azure Machine Learning

Which facility from the below list, allows you to prevent accidental deletion of resources in Azure?
†† Key Vault
†† Azure virtual machines
†† Azure Blueprints
†† Locks

Rugged DevOps combines which two elements?


(Choose two)
■■ DevOps

†† Cost management
†† Microservice Architecture
■■ Security
†† Hackathons

Explanation
DevOps and Security are the correct answers.
All other answers are incorrect.
Rugged DevOps brings together the notions of DevOps and Security. DevOps is about working faster.
Security is about emphasizing thoroughness, which is typically done at the end of the cycle, resulting in
potentially generating unplanned work right at the end of the pipeline. Rugged DevOps is a set of practices designed to integrate DevOps and security, and to meet the goals of both more effectively.

Which term broadly defines what security means in Rugged DevOps?


†† Access control
†† Application server hardening

†† perimeter protection
■■ Securing the pipeline
Explanation
Securing the pipeline is the correct answer.
All other answers, while covering some elements of IT security, and while being important in their own right, do not cover what is meant by security in Rugged DevOps.

With Rugged DevOps, security is more about securing the pipeline, determining where you can add security to the elements that plug into your build and release pipeline. For example, it's about how and where you can add security to your automation practices, production environments, and other pipeline elements while attempting to retain the speed of DevOps.
Rugged DevOps includes bigger questions such as:
Is my pipeline consuming third-party components, and if so, are they secure?

Are there known vulnerabilities within any of the third-party software we use?
How quickly can I detect vulnerabilities (time to detect)?
How quickly can I remediate identified vulnerabilities (time to remediate)?
Module Review Questions  313

What component in Azure DevOps can you use to store, organize, and share access to packages, and integrate those packages with your continuous integration and continuous delivery pipeline?
†† Test Plans
■■ Azure Artifacts
†† Boards
†† Pipelines
Explanation
Azure Artifacts is the correct answer. Azure Artifacts is an integral part
of the component workflow, which you can use to organize and share access to
your packages. It allows you to:

Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in Git.
Protect your packages. Keep every public source package you use, including packages from npmjs and
nuget.org, safe in your feed where only you can delete it, and where it’s backed by the enterprise-grade
Azure SLA.
Integrate seamless package handling into your CI/CD pipeline. Easily access all your artifacts in builds and
releases. Artifacts integrate natively with the Azure Pipelines CI/CD tool.
All other answers are incorrect.
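To make this concrete, the following is a sketch of how an npm client can be pointed at an Azure Artifacts feed via a project-level `.npmrc` file. The organization name `fabrikam` and feed name `my-feed` are placeholder values; the URL follows the documented Azure Artifacts npm feed pattern, but your actual feed URL will differ:

```ini
; .npmrc — project-level npm configuration (illustrative sketch, not a definitive setup)
; "fabrikam" (organization) and "my-feed" (feed name) are placeholders
registry=https://pkgs.dev.azure.com/fabrikam/_packaging/my-feed/npm/registry/
always-auth=true
```

With this in place, `npm install` and `npm publish` resolve packages against the Azure Artifacts feed rather than the public npmjs registry; authentication is typically handled separately, for example with a personal access token.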

Which of the following package types are available to use with Azure Artifacts?
(choose three)
■■ NuGet
■■ npm
†† PowerShell
■■ Maven
Explanation
NuGet, npm, and Maven are the correct answers. PowerShell is not a package type and is incorrect.
Azure Artifacts allows the sharing of code easily by storing Apache Maven, npm, and NuGet packages
together. You can also store packages using Universal Packages, eliminating the need to store binaries in
Git.

Which description from the list below best describes the term Software Composition Analysis?
†† Assessment of production hosting infrastructure just before deployment
†† Analyzing built software to identify load capacity
■■ Analyzing open source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criteria to use in your pipeline

†† Analyzing open source software after it has been deployed to production to identify security vulnerabilities
Explanation
Analyzing open source software (OSS) to identify potential security vulnerabilities and provide validation
that the software meets a defined criteria to use in your pipeline is the correct answer.
All other answers are incorrect.

When consuming an OSS component, whether you're creating or consuming dependencies, you'll typically
want to follow these high-level steps:

From where can extensions be sourced to be integrated into Azure DevOps CI/CD pipelines and help provide software composition analysis?

■■ Azure DevOps Marketplace


†† www.microsoft.com
†† Azure Security Center
†† TFVC git repos

Explanation
Azure DevOps Marketplace is the correct answer. All other answers are incorrect.
Azure DevOps Marketplace is an important site for addressing Rugged DevOps issues. From here you can
integrate specialist security products into your Azure DevOps pipeline. Having a full suite of extensions that
allow seamless integration into Azure DevOps pipelines is invaluable.

Which products from the list below are available as extensions in Azure DevOps Marketplace, and can provide either OSS or source code scanning as part of an Azure DevOps pipeline?
(choose all that apply)
■■ WhiteSource
■■ Checkmarx

■■ Micro Focus Fortify


■■ Veracode

Explanation
All answers are correct.
All of the listed products are available as extensions in Azure DevOps Marketplace, and can provide either OSS or static source code scanning as part of the Azure DevOps pipeline.

Which Azure service from the list below is a monitoring service that can provide threat protection and security recommendations across all of your services, both in Azure and on-premises?
†† Azure Policy
■■ Azure Security Center
†† Azure Key vault
†† Role-based access control
Explanation
Azure Security Center is the correct answer. All other answers are incorrect.
Azure Security Center is a monitoring service that provides threat protection across all of your services, both in Azure and on-premises.
None of the other services provides threat protection and security recommendations across all of your services both in Azure and on-premises.

Which Azure service should you use, from the list below, to monitor all unencrypted SQL databases in your organization?
■■ Azure Policy
†† Azure Security Center
†† Azure Key Vault
†† Azure Machine Learning
Explanation
Azure Policy is the correct answer. All other answers are incorrect.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, which ensures they stay compliant with your corporate standards and service-level agreements (SLAs). A policy definition expresses what to evaluate and what action to take.
take. For example, you could prevent VMs from deploying if they are exposed to a public IP address. You
also could prevent a particular hard disk from being used when deploying VMs to control costs.
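For illustration, a policy rule of this kind is expressed in JSON. The following is a sketch based on the documented Azure Policy definition structure (the exact built-in definition for SQL transparent data encryption may differ); it audits SQL databases where transparent data encryption is not enabled:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Sql/servers/databases"
  },
  "then": {
    "effect": "AuditIfNotExists",
    "details": {
      "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
      "existenceCondition": {
        "field": "Microsoft.Sql/transparentDataEncryption.status",
        "equals": "enabled"
      }
    }
  }
}
```

Assigning a definition like this to a subscription or resource group causes non-compliant databases to surface in the Azure Policy compliance view.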
Initiative definitions simplify the process of managing and assigning policy definitions by grouping a set of
policies as one single item. For example, you could create an initiative named Enable Monitoring in Azure
Security Center, with a goal to monitor all the available security recommendations in your Azure Security
Center. Under this initiative, you would have the following policy definitions:

Which facility from the list below allows you to prevent accidental deletion of resources in Azure?
†† Key Vault
†† Azure virtual machines
†† Azure Blueprints

■■ Locks
Explanation
Locks is the correct answer. All other answers are incorrect.
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage these
locks from within the Azure portal. To view, add, or delete locks, go to the SETTINGS section of any resource's settings blade.
You may need to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly.
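As a sketch of how a lock can be declared outside the portal, the fragment below shows a `Microsoft.Authorization/locks` resource as it might appear in an Azure Resource Manager template (the API version shown is a documented one but may not be the latest; the lock name and notes are placeholder values):

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "canNotDeleteLock",
  "properties": {
    "level": "CanNotDelete",
    "notes": "Prevents accidental deletion of critical resources."
  }
}
```

Setting `"level"` to `ReadOnly` instead would additionally block modifications, not just deletion.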
