
To confirm you have the ActiveDirectory module installed, you can use the Get-Module

command:

Get-Module -Name ActiveDirectory -ListAvailable

To discover which Get commands the module provides for computers, users, and groups, use Get-Command:

Get-Command -Module ActiveDirectory -Verb Get -Noun *computer*

Get-Command -Module ActiveDirectory -Verb Get -Noun *user*

Get-Command -Module ActiveDirectory -Verb Get -Noun *group*

To return every user account in the domain, use a wildcard filter:

Get-ADUser -Filter *

To find user accounts that have been inactive for the past 90 days:

Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly

To find a single user, you can either filter on samAccountName or pass it directly to the Identity parameter:

Get-ADUser -Filter "samAccountName -eq 'jjones'"

Get-ADUser -Identity jjones

To see when each user's password was last set, request the passwordlastset property:

Get-ADUser -Filter * -Properties passwordlastset | Select-Object name,passwordlastset

$today = Get-Date
$30DaysAgo = $today.AddDays(-30)

Now that you have the dates, you can use them in the filter:

Get-ADUser -Filter "passwordlastset -lt '$30DaysAgo'"

To restrict the results to enabled accounts, combine conditions with -and:

Get-ADUser -Filter "Enabled -eq 'True' -and passwordlastset -lt '$30DaysAgo'"
One way to find all available attributes stored in AD is with a little .NET. Using a schema
object, you can find the user class and enumerate all of its attributes:

$schema = [DirectoryServices.ActiveDirectory.ActiveDirectorySchema]::GetCurrentSchema()
$userClass = $schema.FindClass('user')
$userClass.GetAllProperties().Name

Get-ADGroup -Identity AdamBertramLovers

#requires -Module ActiveDirectory

[CmdletBinding()]
param (
[Parameter(Mandatory)]
[string]$FirstName,

[Parameter(Mandatory)]
[string]$LastName,

[Parameter(Mandatory)]
[string]$Department,

[Parameter(Mandatory)]
[int]$EmployeeNumber
)

try {
    ## The user-creation logic goes here

} catch {
    Write-Error -Message $_.Exception.Message
}

## Build the username from the first initial and the last name (for example, jjones)
$userName = '{0}{1}' -f $FirstName.Substring(0, 1), $LastName
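
From here, the script would typically create the account inside the try block. A minimal sketch of that continuation, assuming the standard New-ADUser cmdlet and no further attribute mapping:

## Hypothetical continuation: refuse duplicates, then create the account
if (Get-ADUser -Filter "samAccountName -eq '$userName'") {
    throw "The username $userName already exists."
}
New-ADUser -Name $userName -GivenName $FirstName -Surname $LastName `
    -Department $Department -EmployeeNumber $EmployeeNumber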


The following script walks every IAM group, lists its users and attached managed policies, decodes each policy document, and exports the details to an Excel workbook (Export-Excel comes from the ImportExcel module). Note that $AWSName, $GroupCounts, and $IAMfilename are assumed to be defined earlier in the full script.

foreach ($GroupName in (Get-IAMGroupList).GroupName | Sort-Object) {

    ## List the users in this group
    $Users = (Get-IAMGroup -GroupName $GroupName).Users.UserName
    $NumOfUsers = $Users.Count
    Write-Host "$GroupName : $Users : $NumOfUsers"
    Write-Host ""

    ## List the managed policies attached to this group
    $GroupPolicyArns = Get-IAMAttachedGroupPolicyList -GroupName $GroupName |
        Select-Object -ExpandProperty PolicyArn

    foreach ($GroupPolicyArn in $GroupPolicyArns) {

        Write-Host "$GroupName : $GroupPolicyArn"
        Write-Host ""

        ## Take the most recent version of the policy
        $VersionId = (Get-IAMPolicyVersionList -PolicyArn $GroupPolicyArn).VersionId |
            Select-Object -First 1
        Write-Host "$GroupName : $GroupPolicyArn : $VersionId"
        Write-Host ""

        $Results = Get-IAMPolicyVersion -PolicyArn $GroupPolicyArn -VersionId $VersionId

        ## The policy document comes back URL-encoded; decode it and parse the JSON
        Add-Type -AssemblyName System.Web
        $PolicyJson = [System.Web.HttpUtility]::UrlDecode($Results.Document)
        $Policy = $PolicyJson | ConvertFrom-Json

        try {
            Get-IAMGroup -GroupName $GroupName | ForEach-Object {
                $Groups = [PSCustomObject]@{
                    Date                = (Get-Date -UFormat "%B-%d-%Y|%T")
                    AWSName             = $AWSName
                    GroupName           = $GroupName
                    GroupCounts         = $GroupCounts
                    Group               = $_.Group
                    Users               = $Users
                    UsersInGroup        = $NumOfUsers
                    IsTruncated         = $_.IsTruncated
                    Marker              = $_.Marker
                    PolicyArn           = $GroupPolicyArn
                    PolicyVersion       = $VersionId
                    Policy              = $Policy
                    Version             = $Policy.Version
                    Statement           = $Policy.Statement
                    PolicySid           = $Policy.Statement.Sid
                    PolicyEffect        = $Policy.Statement.Effect
                    PolicyPrincipal     = $Policy.Statement.Principal
                    PolicyAction        = $Policy.Statement.Action
                    PolicyResource      = $Policy.Statement.Resource
                    PolicyCondition     = $Policy.Statement.Condition
                    PolicyConditionBool = $Policy.Statement.Condition.Bool
                }
                $Groups
                $Groups | Export-Excel $IAMfilename -WorksheetName 'GroupPolicy' `
                    -Append -AutoSize -NoNumberConversion *
                if ($?) { "command succeeded" } else { "command failed" }
            } # ForEach-Object
        } # try
        catch [System.Exception] {
            Write-Output "Error: "
            $_.Exception.Message | Export-Excel $IAMfilename -WorksheetName 'ERROR' `
                -AutoSize -AutoFilter -Append
        } # catch
    } # foreach policy ARN
} # foreach group

● Primitive roles: include Owner, Editor, and Viewer.

● Predefined roles

● Custom roles: allow cloud administrators to create and administer their own
roles. Custom roles are assembled using permissions defined in IAM. While you
can use most permissions in a custom role, some, such as
iam.serviceAccounts.getAccessToken, are not available in custom roles.

Granting Roles to Identities


Once you have determined which roles you want to provide to users, you can assign
roles to users through the IAM console. It is important to know that permissions cannot
be assigned to users. They can be assigned only to roles. Roles are then assigned to
users.
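
The console is one way to do this; the same role binding can also be created from the command line. A minimal sketch, with a hypothetical project ID and user:

gcloud projects add-iam-policy-binding my-project-id \
    --member="user:jane@example.com" --role="roles/viewer"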

Exporting Billing Data


You can export billing data for later analysis or for compliance reasons. Billing data can
be exported to either a BigQuery database or a Cloud Storage file.

To export billing data to BigQuery, navigate to the Billing section of the console and
select Billing export from the menu. In the form that appears, select the billing account
you would like to export and choose either BigQuery Export or File Export (see Figure
3.17).

Understand the GCP resource hierarchy. All resources are organized within your
resource hierarchy. You can define the resource hierarchy using one organization and
multiple folders and projects. Folders are useful for grouping departments and other
groups that manage their projects separately. Projects contain resources such as VMs and
cloud storage buckets. Projects must have billing accounts associated with them to use
more than free services.

Custom images are especially useful if you have to configure an operating system and
install additional software on each instance of a VM that you run. Instead of repeatedly
configuring and installing software for each instance, you could configure and install
once and then create a custom image from the boot disk of the instance. You then
specify that custom image when you start other instances, which will have the
configuration and software available without any additional steps.
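
For example, a custom image could be created from an instance's boot disk with a
command along these lines (the image, disk, and zone names here are hypothetical):

gcloud compute images create my-custom-image \
    --source-disk=my-instance --source-disk-zone=us-central1-a

New instances can then reference the image through the --image flag of gcloud compute
instances create.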

Deploying Kubernetes Clusters Using Cloud Shell and Cloud SDK


Like other GCP services, Kubernetes Engine can be managed using the command line.
The basic command for working with Kubernetes Engine is the following gcloud
command:

gcloud beta container


Notice that Kubernetes Engine commands include the word beta. Google indicates a
service that is not yet in general availability by including the word alpha or beta in the
gcloud command. By the time you read this, Kubernetes Engine may be generally
available, in which case the beta term will no longer be used.

This gcloud command has many parameters, including the following:

● Project
● Zone
● Machine type
● Image type
● Disk type
● Disk size
● Number of nodes

A basic command for creating a cluster looks like this:

gcloud container clusters create ch07-cluster --num-nodes=3 --region=us-central1


Commands for creating clusters can become quite long. For example, here is the
command to create a cluster using the standard template:

gcloud beta container --project "ferrous-depth-220417" clusters create "standard-cluster-2" \
    --zone "us-central1-a" --username "admin" --cluster-version "1.9.7-gke.6" \
    --machine-type "n1-standard-1" --image-type "COS" --disk-type "pd-standard" --disk-size "100" \
    --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --num-nodes "3" --enable-cloud-logging --enable-cloud-monitoring \
    --network "projects/ferrous-depth-220417/global/networks/default" \
    --subnetwork "projects/ferrous-depth-220417/regions/us-central1/subnetworks/default" \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard \
    --enable-autoupgrade --enable-autorepair

Rather than write this kind of command from scratch, you can use Cloud Console to
select a template and then use the option to generate the equivalent command line from
the Create Cluster form.

Deploying Application Pods

Now that you have created a cluster, let’s deploy an application.

From the Cluster page of the Kubernetes Engine on Cloud Console, select Create
Deployment. A form such as the one in Figure 7.8 appears. Within this form you can
specify the following:

● Container image
● Environment variables
● Initial command
● Application name
● Labels
● Namespace
● Cluster to deploy to
FIGURE 7.8 The Create Deployment option provides a form to specify a container to
run and an initial command to start the application running.

Once you have specified a deployment, you can display the corresponding YAML
specification, which can be saved and used to create deployments from the command
line. Figure 7.9 shows an example deployment YAML file. The output is always
displayed in YAML format.
FIGURE 7.9 YAML specification for a Kubernetes deployment
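
A deployment specification of the kind shown in Figure 7.9 looks roughly like the
following sketch, which reuses the ch07-app image and port 8080 from the kubectl
example later in this section (all field values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ch07-app-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ch07-app
  template:
    metadata:
      labels:
        app: ch07-app
    spec:
      containers:
      - name: ch07-app
        image: ch07-app
        ports:
        - containerPort: 8080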
In addition to installing Cloud SDK, you will need to install the Kubernetes
command-line tool kubectl to work with clusters from the command line. You can
do this with the following command:

gcloud components install kubectl

If the Cloud SDK component manager is disabled, you may receive an error when
running gcloud components install kubectl. If that occurs, you can install kubectl
separately, following the instructions at https://cloud.google.com/sdk/install.

The Cloud SDK component manager works only if you don’t install SDK through
another package manager. If you want to use the component manager, you can install it
using one of these methods:

● https://cloud.google.com/sdk/downloads#versioned
● https://cloud.google.com/sdk/downloads#interactive

Additional packages are available in Google's deb and yum repositories; all the same
components are available, and you just need to use your existing package manager to
install them.

● https://cloud.google.com/sdk/downloads#apt-get
● https://cloud.google.com/sdk/downloads#yum

You can then use kubectl to run a Docker image on a cluster by using the kubectl run
command. Here’s an example:

kubectl run ch07-app-deploy --image=ch07-app --port=8080


This will run a Docker image called ch07-app and make its network accessible on port
8080. If after some time you’d like to scale up the number of replicas in the deployment,
you can use the kubectl scale command:

kubectl scale deployment ch07-app-deploy --replicas=5


This example would create five replicas.

Monitoring Kubernetes
Stackdriver is GCP’s comprehensive monitoring, logging, and alerting product. It can be
used to monitor Kubernetes clusters.

When creating a cluster, be sure to enable Stackdriver monitoring and logging by
selecting Advanced Options in the Create Cluster form in Cloud Console. Under
Additional Features, choose Enable Logging Service and Enable Monitoring Service, as
shown in Figure 7.10.

FIGURE 7.10 Expanding the Advanced Options in the Create Cluster dialog will show
two check boxes for enabling Stackdriver logging and monitoring.

To list the names and basic information of all clusters, use this command:

gcloud container clusters list

To view the details of a specific cluster, use describe:

gcloud container clusters describe --zone us-central1-a standard-cluster-1

First, you need to ensure you have a properly configured kubeconfig file, which
contains information on how to communicate with the cluster API. Run the command
gcloud container clusters get-credentials with the name of a zone or
region and the name of a cluster. Here’s an example:

gcloud container clusters get-credentials --zone us-central1-a standard-cluster-1

This will configure the kubeconfig file for a cluster named standard-cluster-1 in the
us-central1-a zone. Figure 8.17 shows an example output of that command, which
includes the status of fetching and setting authentication data.
You can list the nodes in a cluster using the following:

kubectl get nodes

Similarly, to list pods, use the following command:

kubectl get pods

For more details about nodes and pods, use these commands:

kubectl describe nodes

kubectl describe pods

To view the status of clusters from the command line, use the gcloud container
commands, but to get information about Kubernetes managed objects, like nodes, pods,
and containers, use the kubectl command.

Adding, Modifying, and Removing with Cloud SDK and Cloud Shell
The command to add or modify nodes is gcloud container clusters resize.

The command takes three parameters, as shown here:

● cluster name
● node pool name
● cluster size

For example, assume you have a cluster named standard-cluster-1 running a node pool
called default-pool. To increase the size of the cluster from 3 to 5, use this command:

gcloud container clusters resize standard-cluster-1 --node-pool default-pool --size 5 --region=us-central1

To list deployments, use the following command:


kubectl get deployments

To add and remove pods, change the configuration of deployments using the kubectl
scale deployment command. For this command, you have to specify the deployment
name and the number of replicas. For example, to set the number of replicas to 5 for a
deployment named nginx-1, use this:

kubectl scale deployment nginx-1 --replicas 5

To have Kubernetes manage the number of pods based on load, use the autoscale
command. The following command will add or remove pods as needed to meet demand
based on CPU utilization. If CPU usage exceeds 80 percent, up to 10 additional pods or
replicas will be added. The deployment will always have at least one pod or replica.

kubectl autoscale deployment nginx-1 --max 10 --min 1 --cpu-percent 80

To remove a deployment, use the delete deployment command like so:

kubectl delete deployment nginx-1

Viewing the Image Repository and Image Details with Cloud SDK and Cloud Shell
From the command line, you work with images in a registry using
gcloud container images commands. For example, to list the
contents of a registry, use this:

gcloud container images list
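
To drill into a particular image, the list-tags subcommand shows its tags and digests;
for example, with a hypothetical registry path:

gcloud container images list-tags us.gcr.io/my-project/nginx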

Deploying an App Using Cloud Shell and SDK


First, you will work in a terminal window using Cloud Shell,
which you can start from the console by clicking the Cloud Shell
icon. Make sure gcloud is configured to work with App Engine by
using the following command:

gcloud components install app-engine-python

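With the component installed, deploying the app is typically a matter of running gcloud
app deploy from the directory containing the app's app.yaml configuration; a minimal
sketch:

gcloud app deploy app.yaml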

Adding, Modifying, and Removing Services with Cloud SDK and Cloud Shell
Use the kubectl get services command to list services. Figure
8.36 shows an example listing.

kubectl get services


Notice how in the cluster we have one supervising machine, which is the block at the
bottom running Kubernetes, known as the master endpoint. This master endpoint is in
touch with a number of individual container machines, each running containers and each
communicating with the master using a piece of software known as a kubelet. In the
world of GKE, that coordinating machine is known as the master node, running
Kubernetes. Each of the other VMs in the cluster is known as a node instance. Each node
instance has its own kubelet talking to the master, and atop it run pods. A pod is the
atomic unit of Kubernetes cluster orchestration.

Inside each pod there can be one or multiple containers.

This is important, as the master talks to node instances, which in turn contain pods and those
pods contain the containers.

You could run Kubernetes clusters on GCP, AWS, Azure, or even on-premise. That's the big
appeal of Kubernetes.

GKE is merely a managed service for running Kubernetes clusters on the Google Cloud.

As we have already discussed, these GKE clusters have a master node running
Kubernetes that controls a set of node instances running kubelets; the kubelets in turn
manage pods, which hold the individual containers. GKE can be thought of as shorthand
for all of the container functionality available on GCP.

Kubernetes is the orchestrator that runs on the master node, but it really depends on the
kubelets running on the nodes. In effect, a pod is a collection of closely coupled
containers, all of which share the same underlying resources.

For instance, they will all have the same IP address, and they can share disk volumes. A
web server pod, for instance, could have one container for the server itself and further
containers for the logging and metrics infrastructure. Pods are defined using
configuration files specified either in JSON or YAML. Each pod is, in effect, managed by
a kubelet, which is, in turn, controlled by the master node.

The VM instances that are contained in a container cluster are of two types, as we have
already seen: one special instance that is the master, which runs Kubernetes, and the
others are node instances that run kubelets.
1. Set the default compute zone and the compute region as appropriate:

gcloud config set compute/zone ZONE

gcloud config set compute/region REGION

2. Use the gcloud container clusters create command to create a cluster running the
GKE. The name of our cluster is my-first-cluster and we want this cluster to have
exactly one node. This will be set up in the default zone and the default region that we
specified in our configuration, that is, us-central1-a:

gcloud container clusters create my-first-cluster --num-nodes 1

3. The confirmation message on the command line says that we have one node in this
cluster and that its current status is running. You can use the gcloud compute instances
list command to see the list of clusters and VM instances that you have running. Also,
note that with just one command GKE provisioned a fully functional Kubernetes cluster.
This saves a lot of the time and effort of manually configuring and spinning up k8s
clusters.
4. Now that we have a cluster up and running, we will deploy WordPress to it. WordPress is
simply a content management system used for applications like blogging. We will deploy
a WordPress Docker container on our cluster. This WordPress Docker container is
publicly available as an image and can be accessed using --image=tutum/wordpress.
5. Use the command-line equivalent for working with Kubernetes clusters: the kubectl
command. We want this container with the WordPress application to run on port 80. This
WordPress image contains everything needed to run this WordPress site, including a
MySQL database:

kubectl run wordpress --image=tutum/wordpress --port=80

Having deployed this container to your cluster, what you have created on your cluster is a
pod. A pod is basically one or more containers that go together as a group for some meaningful
deployment. In this case, for our WordPress container, there is one container within this pod.

You can use the kubectl get pods command to see the status of the pods that you have running.
We have one pod that starts with the term wordpress. There are zero out of one (0/1) pods
running and its current status is ContainerCreating, as shown in the following screenshot:
Storage and persistent disks
Recall that while working with Compute Engine instances, we had to choose from many storage
options, which included persistent disks that could either be ordinary or SSD, local SSD disks,
and Google Cloud Storage. Storage options on Kubernetes engine instances are not all that
different, but there is, however, one important subtlety. This has to do with the type of attached
disk. Recall that when you use a Compute Engine instance with an attached disk, the
link between the instance and the disk remains for as long as the instance exists, and the
same disk volume stays attached to the same instance until the VM is deleted, unless you
explicitly detach the disk and use it with a different instance.

However, when you are using containers, the on-disk files are ephemeral. If a container
restarts after a crash, for instance, whatever data you had in your disk files is lost. There
is a way around this ephemeral nature of container storage, and that is by using a
persistent abstraction known as GCE persistent disks. If you are going to make use of
Kubernetes Engine instances and want your data to remain associated with your
containers rather than be ephemeral, you have to make use of this abstraction; otherwise,
your disk data will not persist across container restarts.

Dynamically provisioned storage classes use HDD by default, but we can customize a
user-defined storage class to use SSD instead. Notice that the kind field of the file below
is StorageClass. Here, GCE's persistent disk is the provisioner, with the type set to SSD.

1. You can save it with the name ssd.yaml or something convenient for you:

nano ssd.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

2. Once it is saved, you can create a PVC (PersistentVolumeClaim). Let's name it
storage-change.yaml. Notice that it references our previously created storage class in its
storageClassName field:

nano storage-change.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-change
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 1Gi

3. Apply the storage change by running the following commands. Make sure to run
them in the sequence given below, since the storage class itself needs to be created
before the PVC:

kubectl apply -f ssd.yaml


kubectl apply -f storage-change.yaml
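
Once the claim is bound, a pod can use the SSD-backed storage by referencing the claim
name. A minimal sketch (the pod, container, and mount names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-test-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: ssd-volume
  volumes:
  - name: ssd-volume
    persistentVolumeClaim:
      claimName: storage-change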

To create a cluster with node autoscaling enabled from the start, pass the autoscaling
flags to the create command:

gcloud container clusters create [CLUSTER-NAME] --num-nodes=5 \
    --enable-autoscaling --min-nodes=3 --max-nodes=10 [--zone=[ZONE] --project=[PROJECT-ID]]

Scaling pods with the horizontal pod autoscaler


It is also possible in recent versions of Kubernetes to scale pods (rather than nodes).
This also allows us to scale the deployment down to zero replicas if there is no demand
for the service, while retaining the ability to scale up with demand.

Combining the two, that is, scaling in both directions allows you to make your application truly
elastic. Remember that pods are a Kubernetes concept, so unsurprisingly, such autoscaling is
carried out using kubectl rather than gcloud:

1. To do this, you need to apply a Horizontal Pod Autoscaler (HPA) by using the kubectl
autoscale command. The max flag is mandatory, while the min flag is optional. The
cpu-percent flag indicates the target total CPU utilization:

kubectl autoscale deployment my-app --max=6 --min=4 --cpu-percent=50


2. To check any HPA, use the kubectl describe command:

kubectl describe hpa [NAME-OF-HPA]

3. Similarly, use the kubectl delete command to remove HPAs:

kubectl delete hpa [NAME-OF-HPA]

You can stop serving versions using the gcloud app versions stop command and
passing a list of versions to stop. For example, to stop serving versions named v1 and v2, use the
following:

gcloud app versions stop v1 v2


The command to split traffic is gcloud app services set-traffic.


Here’s an example:

gcloud app services set-traffic serv1 --splits v1=.4,v2=.6


This will split traffic with 40 percent going to version 1 of the
service named serv1 and 60 percent going to version 2. If no
service name is specified, then all services are split.

The gcloud app services set-traffic command takes the following parameters:

● --migrate indicates that App Engine should migrate traffic from the previous version
to the new version.
● --split-by specifies how to split traffic using either IP or cookies. Possible values are
ip, cookie, and random.
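
Putting these together, a cookie-based split that sends half the traffic to each version
might look like this (the service and version names are illustrative):

gcloud app services set-traffic serv1 --splits v1=.5,v2=.5 --split-by cookie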

The container registry


The container registry provides secure, private Docker image storage on GCP. It can be thought
of as a way to access, that is, to push, pull, and manage Docker images from any system,
whether it's a Compute Engine instance or your on-premise hardware through a secure HTTPS
endpoint. In a sense, this is a very controlled and regulated way of dealing with container
images.
You should be aware that it is possible to hook up Docker and the container registry so
that they talk to each other directly. By using the Docker credential helper command-line
tool, you can authenticate Docker directly with the container registry. Since this
container registry can be written to and read from by pretty much any system, you could
also use third-party cluster management or CI/CD tools, even those that are not on GCP.
While Docker provides a central registry to store public images, you may not want your
images to be accessible to the world. In this case, you must use a private registry:

1. To configure the container registry in GCP, first of all, you need to tag a built image
with a registry name. The registry name follows the hostname/project_ID/image format,
where the hostname is your Google container registry hostname. For example, us.gcr.io
stores your image in the US region:

docker tag nginx us.gcr.io/loonycorn-project-08/nginx

2. Then you need to push the image to the Docker container registry using the gcloud
command line:

gcloud docker -- push us.gcr.io/loonycorn-project-08/nginx

3. To test your registry, you can try pulling your image with the same registry tag format:

gcloud docker -- pull us.gcr.io/loonycorn-project-08/nginx

4. Finally, to delete it from the container registry, use the following command:

gcloud container images delete us.gcr.io/loonycorn-project-08/nginx

5. Now, if you are using Kubernetes, kubectl has built-in support for container registries,
so you can use public or private container images as well:

kubectl run nginx --image=us.gcr.io/loonycorn-project-08/nginx

Assignments

An assignment is a policy definition or initiative that has been assigned to a specific
scope. This scope could range from a management group to an individual resource. The
term scope refers to all the resources, resource groups, subscriptions, or management
groups that the definition is assigned to. Assignments are inherited by all child
resources. This design means that a definition applied to a resource group is also applied
to resources in that resource group. However, you can exclude a subscope from the
assignment.

For example, at the subscription scope, you can assign a definition that prevents the
creation of networking resources. You could exclude a resource group in that
subscription that is intended for networking infrastructure. You then grant access to this
networking resource group to users that you trust with creating networking resources.

In another example, you might want to assign a resource type allowlist definition at the
management group level. Then you assign a more permissive policy (allowing more
resource types) on a child management group or even directly on subscriptions.
However, this example wouldn't work because Azure Policy is an explicit deny system.
Instead, you need to exclude the child management group or subscription from the
management group-level assignment. Then, assign the more permissive definition on
the child management group or subscription level. If any assignment results in a
resource getting denied, then the only way to allow the resource is to modify the
denying assignment.

Policy assignments always use the latest state of their assigned definition or initiative
when evaluating resources. If a policy definition that is already assigned is changed, all
existing assignments of that definition will use the updated logic when evaluating.

Example 2: Get a specific policy assignment

$ResourceGroup = Get-AzResourceGroup -Name 'ResourceGroup11'
Get-AzPolicyAssignment -Name 'PolicyAssignment07' -Scope $ResourceGroup.ResourceId

Example 3: Get all policy assignments assigned to a management group

$mgId = 'myManagementGroup'
Get-AzPolicyAssignment -Scope "/providers/Microsoft.Management/managementgroups/$mgId"

The first command specifies the ID of the management group to query. The second
command gets all of the policy assignments that are assigned to the management group
with the ID 'myManagementGroup'. Note that the scope string must use double quotes
so that $mgId is expanded.

To trigger an on-demand compliance evaluation for a resource group with Azure
PowerShell, use Start-AzPolicyComplianceScan:

Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG'

{
  "if": {
    <condition> | <logical operator>
  },
  "then": {
    "effect": "deny | audit | modify | append | auditIfNotExists |
               deployIfNotExists | disabled"
  }
}
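
As an illustration of this structure (an example of ours, not taken from the text above), a
rule that denies any resource created outside two allowed regions could look like this:

{
  "if": {
    "not": {
      "field": "location",
      "in": ["eastus", "westus"]
    }
  },
  "then": {
    "effect": "deny"
  }
}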

Store Policy Assignment Information

To store the policy assignment information, we will use the Get-AzPolicyAssignment
PowerShell cmdlet:

$PolicyAssignmentName = 'sbx - Assign Required Date Diff Validation delete-by'
$PolicyAssignment = Get-AzPolicyAssignment -Name $PolicyAssignmentName
$PolicyDefinition = Get-AzPolicyDefinition -Id $PolicyAssignment.Properties.PolicyDefinitionId

The $PolicyParameterObject is a hashtable of the parameters that the policy assignment
expects.

You can see that we look up the policy assignment by name and store it, along with the
policy definition retrieved through its ID, as these will be used later.
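
A sketch of how that hashtable might be built and applied; the parameter names here are
hypothetical, since they depend entirely on what the assigned policy definition declares:

## Hypothetical parameters expected by the assigned policy definition
$PolicyParameterObject = @{
    tagName = 'delete-by'
    effect  = 'Audit'
}

## Update the existing assignment with the new parameter values
Set-AzPolicyAssignment -Name $PolicyAssignmentName `
    -PolicyParameterObject $PolicyParameterObject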
