command:
Get-ADUser -Filter *
$today = Get-Date
$30DaysAgo = $today.AddDays(-30)
Now that you have the dates, you can use them in the filter:
Get-ADUser -Filter "passwordlastset -lt '$30DaysAgo'"
One way to find all available attributes stored in AD is with a little .NET. Using a schema
object, you can find the user class and enumerate all of its attributes:
$schema = [DirectoryServices.ActiveDirectory.ActiveDirectorySchema]::GetCurrentSchema()
$userClass = $schema.FindClass('user')
$userClass.GetAllProperties().Name
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [string]$FirstName,

    [Parameter(Mandatory)]
    [string]$LastName,

    [Parameter(Mandatory)]
    [string]$Department,

    [Parameter(Mandatory)]
    [int]$EmployeeNumber
)
try {
} catch {
    Write-Error -Message $_.Exception.Message
}
$GroupName
[System.Reflection.Assembly]::LoadWithPartialName("System.Web.HttpUtility")
$policyJson = [System.Web.HttpUtility]::UrlDecode($results.Document)
#$PolicyJson
# June 19, 2022
● Primitive roles, which include Owner, Editor, and Viewer
● Predefined roles
To export billing data to BigQuery, navigate to the Billing section of the console and
select Billing export from the menu. In the form that appears, select the billing account
you would like to export and choose either BigQuery Export or File Export (see Figure
3.17).
Understand the GCP resource hierarchy. All resources are organized within your
resource hierarchy. You can define the resource hierarchy using one organization and
multiple folders and projects. Folders are useful for grouping departments and other
groups that manage their projects separately. Projects contain resources such as VMs and
cloud storage buckets. Projects must have billing accounts associated with them to use
more than free services.
Custom images are especially useful if you have to configure an operating system and
install additional software on each instance of a VM that you run. Instead of repeatedly
configuring and installing software for each instance, you could configure and install
once and then create a custom image from the boot disk of the instance. You then
specify that custom image when you start other instances, which will have the
configuration and software available without any additional steps.
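As a sketch of that workflow (the instance, image, and zone names here are hypothetical,
not from the text), you might create the image from an existing boot disk and then start
new instances from it:

```shell
# Create a custom image from the boot disk of a configured instance
# (my-instance, my-custom-image, and the zone are hypothetical names).
gcloud compute images create my-custom-image \
    --source-disk my-instance \
    --source-disk-zone us-central1-a

# Start a new instance from the custom image; it boots with the
# configuration and installed software already in place.
gcloud compute instances create my-new-instance \
    --image my-custom-image \
    --zone us-central1-a
```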
● Project
● Zone
● Machine type
● Image type
● Disk type
● Disk size
● Number of nodes
Rather than write this kind of command from scratch, you can use Cloud Console to
select a template and then use the option to generate the equivalent command line from
the Create Cluster form.
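A command generated this way might look something like the following sketch; the
cluster name, zone, and sizes are illustrative values, not output copied from the console:

```shell
# Roughly what the Create Cluster form's command-line option generates;
# every value shown here is illustrative.
gcloud container clusters create standard-cluster-1 \
    --zone us-central1-a \
    --machine-type n1-standard-1 \
    --image-type COS \
    --disk-type pd-standard \
    --disk-size 100 \
    --num-nodes 3
```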
From the Cluster page of the Kubernetes Engine on Cloud Console, select Create
Deployment. A form such as the one in Figure 7.8 appears. Within this form you can
specify the following:
● Container image
● Environment variables
● Initial command
● Application name
● Labels
● Namespace
● Cluster to deploy to
FIGURE 7.8 The Create Deployment option provides a form to specify a container to
run and an initial command to start the application running.
Once you have specified a deployment, you can display the corresponding YAML
specification, which can be saved and used to create deployments from the command
line. Figure 7.9 shows an example deployment YAML file. The output is always
displayed in YAML format.
FIGURE 7.9 YAML specification for a Kubernetes deployment
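A minimal deployment specification of the kind shown in Figure 7.9 might look like the
following sketch; the name, image, and label values are placeholders, not the figure's
exact contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1          # placeholder application name
  labels:
    app: nginx-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx-1
        image: nginx:latest   # placeholder container image
```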
In addition to installing Cloud SDK, you will need to install the Kubernetes
command-line tool kubectl to work with clusters from the command line. You can
do this with the following command:
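The command referred to here is the SDK component manager's install command:

```shell
# Install the Kubernetes command-line tool as a Cloud SDK component.
gcloud components install kubectl
```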
If the Cloud SDK component manager is disabled, you may receive an error when
running gcloud components install kubectl. If that occurs, reinstall the SDK so that the
component manager is enabled, following the instructions at
https://cloud.google.com/sdk/install.
The Cloud SDK component manager works only if you don't install the SDK through
another package manager. If you want to use the component manager, you can install
the SDK using one of these methods:
● https://cloud.google.com/sdk/downloads#versioned
● https://cloud.google.com/sdk/downloads#interactive
Additional packages are available in Google's deb and yum repositories; all the same
components are available, and you just need to use your existing package manager to install them.
● https://cloud.google.com/sdk/downloads#apt-get
● https://cloud.google.com/sdk/downloads#yum
You can then run a Docker image on a cluster by using the kubectl run
command. Here’s an example:
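A sketch of such a command; the deployment name, image path, and port are
hypothetical values, not ones given in the text:

```shell
# Run a container image on the cluster; kubectl creates a deployment for it
# (app-deploy, the image path, and the port are hypothetical).
kubectl run app-deploy --image=us.gcr.io/my-project/my-app --port=8080
```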
Monitoring Kubernetes
Stackdriver is GCP’s comprehensive monitoring, logging, and alerting product. It can be
used to monitor Kubernetes clusters.
FIGURE 7.10 Expanding the Advanced Options in the Create Cluster dialog will show
two check boxes for enabling Stackdriver logging and monitoring.
To list the names and basic information of all clusters, use this command:
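That command is:

```shell
# List the names and basic information of all clusters in the project.
gcloud container clusters list
```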
First, you need to ensure you have a properly configured kubeconfig file, which
contains information on how to communicate with the cluster API. Run the command
gcloud container clusters get-credentials with the name of a zone or
region and the name of a cluster. Here’s an example:
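Using the cluster and zone named below, the command would be:

```shell
# Fetch credentials for the cluster and write them to the kubeconfig file.
gcloud container clusters get-credentials standard-cluster-1 \
    --zone us-central1-a
```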
This will configure the kubeconfig file for a cluster named standard-cluster-1 in the
us-central1-a zone. Figure 8.17 shows an example output of that command, which
includes the status of fetching and setting authentication data.
You can list the nodes in a cluster using the following:
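That is done with:

```shell
# List the nodes in the cluster the kubeconfig currently points at.
kubectl get nodes
```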
For more details about nodes and pods, use these commands:
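Those commands are the corresponding describe forms:

```shell
# Show detailed state for every node and pod, respectively.
kubectl describe nodes
kubectl describe pods
```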
To view the status of clusters from the command line, use the gcloud container
commands, but to get information about Kubernetes managed objects, like nodes, pods,
and containers, use the kubectl command.
Adding, Modifying, and Removing with Cloud SDK and Cloud Shell
The command to add or modify nodes is gcloud container clusters resize. It takes the
following parameters:
● cluster name
● node pool name
● cluster size
For example, assume you have a cluster named standard-cluster-1 running a node pool
called default-pool. To increase the size of the cluster from 3 to 5, use this command:
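Under those assumptions, the command might look like this (--num-nodes is the flag
name in current gcloud releases; older releases used --size):

```shell
# Resize the default-pool node pool of standard-cluster-1 to 5 nodes.
gcloud container clusters resize standard-cluster-1 \
    --node-pool default-pool \
    --num-nodes 5
```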
Viewing the Image Repository and Image Details with Cloud SDK and Cloud Shell
From the command line, you work with images in a registry using
gcloud container images commands. For example, to list the
contents of a registry, use this:
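For example:

```shell
# List the container images stored in the project's registry.
gcloud container images list
```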
Adding, Modifying, and Removing Services with Cloud SDK and Cloud Shell
Use the kubectl get services command to list services. Figure
8.36 shows an example listing.
This is important, as the master talks to node instances, which in turn contain pods, and those
pods contain the containers.
You could run Kubernetes clusters on GCP, AWS, Azure, or even on-premises. That's the big
appeal of Kubernetes.
GKE is merely a managed service for running Kubernetes clusters on Google Cloud.
As we have already discussed, these GKE clusters have a master node running Kubernetes that
controls a set of node instances running Kubelets, and inside those nodes are the individual
containers. Kubernetes can be considered an abstraction over all of the container functionality
available on GCP.
Kubernetes is the orchestrator that runs on the master node, but it really depends on the Kubelets
that are running on the nodes. In effect, a pod is a collection of closely coupled containers, all of
which share the same underlying resources.
For instance, they will all have the same IP address and they can share disk volumes. Take a
web server pod, for instance: it might have one container for the server itself and further
containers for the logging and metrics infrastructure. Pods are defined using configuration
files specified in either JSON or YAML. Each pod is, in effect, managed by a Kubelet, which is,
in turn, controlled by the master node.
The VM instances contained in a container cluster are of two types, as we have already
seen: one special instance is the master, which runs Kubernetes, and the others are node
instances that run Kubelets.
1. Set the default compute zone and the compute region as appropriate.
2. Use the gcloud container clusters create command to create a cluster running the
GKE. The name of our cluster is my-first-cluster and we want this cluster to have
exactly one node. This will be set up in the default zone and the default region that we
specified in our configuration, that is, us-central1-a:
3. A confirmation message on the command line says that we have one node in this
cluster and that its current status is running. You can use the gcloud compute instances list
command to see the list of clusters and VM instances that you have running. Also, note
that with just one command GKE provisioned a fully functional Kubernetes cluster. This
saves a lot of the time and effort of manually configuring and spinning up k8s clusters.
4. Now that we have a cluster up and running, we will deploy WordPress to it. WordPress is
simply a content management system used for applications like blogging. We will deploy
a WordPress Docker container on our cluster. This WordPress Docker container is
publicly available as an image and can be accessed using --image=tutum/wordpress.
5. Use the command-line equivalent for working with Kubernetes clusters: the kubectl
command. We want this container with the WordPress application to run on port 80. This
WordPress image contains everything needed to run this WordPress site, including a
MySQL database:
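The commands called for in steps 2 and 5 might look like the following sketch; the
cluster name, node count, image, and port all come from the steps above, while the exact
flag spellings are assumptions:

```shell
# Step 2: create a one-node cluster named my-first-cluster in the default
# zone/region configured earlier (us-central1-a).
gcloud container clusters create my-first-cluster --num-nodes 1

# Step 5: deploy the public WordPress image and run it on port 80.
kubectl run wordpress --image=tutum/wordpress --port=80
```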
Having deployed this container to your cluster, what you have created on your cluster is a
pod. A pod is basically one or more containers that go together as a group for some meaningful
deployment. In this case, for our WordPress container, there is one container within this pod.
You can use the kubectl get pods command to see the status of the pods that you have running.
We have one pod that starts with the term wordpress. There are zero out of one (0/1) pods
running and its current status is ContainerCreating, as shown in the following screenshot:
Storage and persistent disks
Recall that while working with Compute Engine instances, we had to choose from many storage
options, which included persistent disks that could either be ordinary or SSD, local SSD disks,
and Google Cloud Storage. Storage options on Kubernetes Engine instances are not all that
different; there is, however, one important subtlety. This has to do with the type of attached
disk. Recall that when you use a Compute Engine instance that comes with an attached
disk, the link between the Compute Engine instance and the attached disk remains for as long
as the instance exists, and the same disk volume stays attached to the same instance until
the VM is deleted. You can even detach the disk and use it with a different
instance.
However, when you are using containers, the on-disk files are ephemeral. If a container restarts
after a crash, for instance, whatever data you had in your disk files is lost.
There is a way around this ephemeral nature of container storage, and that is to use a
persistent abstraction known as GCE persistent disks. If you are going to make use of
Kubernetes Engine instances and want your data to remain associated with your containers
rather than be ephemeral, you have to make use of this abstraction; otherwise your disk data
will not persist after a container restarts.
Dynamically provisioned storage classes use HDD by default, but we can customize this and
attach an SSD to a user-defined storage class. Notice that the kind of the file is StorageClass.
Here, GCE's persistent disk is the provisioner, with the type SSD.
1. You can save it with the name ssd.yaml or something convenient for you:
nano ssd.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
2. Once it is saved, you can create a PVC (PersistentVolumeClaim). Let's name it
storage-change.yaml. Notice that it has the name of our previously created storage class
in storageClassName:
nano storage-change.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-change
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 1Gi
3. Apply the storage change by running the following commands. Make sure to run them
in the sequence given below, since the storage class itself needs to be created before
the PVC:
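Assuming the file names used above, the sequence is:

```shell
# The StorageClass must exist before the claim that references it.
kubectl apply -f ssd.yaml
kubectl apply -f storage-change.yaml
```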
Combining the two, that is, scaling in both directions, allows you to make your application truly
elastic. Remember that pods are a Kubernetes concept, so, unsurprisingly, such autoscaling is
carried out using kubectl rather than gcloud:
1. To do this, you need to apply a Horizontal Pod Autoscaler (HPA) by using the kubectl
autoscale command. The max flag is mandatory, while the min flag is optional. The
cpu-percent flag indicates the target total CPU utilization:
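A sketch of such a command; the deployment name and thresholds are illustrative:

```shell
# Autoscale a deployment between 1 and 5 pods, targeting 80% CPU utilization.
kubectl autoscale deployment wordpress --min 1 --max 5 --cpu-percent 80
```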
You can stop serving versions using the gcloud app versions stop command and
passing a list of versions to stop. For example, to stop serving versions named v1 and v2, use the
following:
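Using those version names:

```shell
# Stop serving the versions named v1 and v2.
gcloud app versions stop v1 v2
```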
All of the zones are within the same region. The following
diagram demonstrates this concept:
● --migrate indicates that App Engine should migrate traffic from the previous version
to the new version.
● --split-by specifies how to split traffic using either IP or cookies. Possible values are
ip, cookie, and random.
1. To configure the container registry in GCP, first of all you need to tag a built image with
a registry name. The registry name follows the hostname/project_ID/image format.
The hostname is your Google Container Registry hostname. For example, us.gcr.io stores
your image in the US region:
2. Then you need to push the image to the Docker container registry using the gcloud
command line:
3. To test your registry, you can try pulling your image with the same registry tag format:
4. Finally, to delete it from the container registry, use the following command:
5. Now, if you are using Kubernetes, it has a built-in container registry to run with kubectl
so you can use public or private container images as well:
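The steps above can be sketched as follows; the project ID and image name are
hypothetical, and the gcloud docker wrapper form shown in step 2 is an assumption
about the era of the text (newer setups push with docker push directly after
gcloud auth configure-docker):

```shell
# 1. Tag the built image with the registry name (hostname/project_ID/image).
docker tag my-app us.gcr.io/my-project/my-app

# 2. Push the image to the container registry via the gcloud command line.
gcloud docker -- push us.gcr.io/my-project/my-app

# 3. Test the registry by pulling the image with the same registry tag format.
docker pull us.gcr.io/my-project/my-app

# 4. Delete the image from the container registry.
gcloud container images delete us.gcr.io/my-project/my-app

# 5. Run a public or private registry image on a Kubernetes cluster.
kubectl run my-app --image=us.gcr.io/my-project/my-app --port=8080
```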
Assignments
For example, at the subscription scope, you can assign a definition that prevents the
creation of networking resources. You could exclude a resource group in that
subscription that is intended for networking infrastructure. You then grant access to this
networking resource group to users that you trust with creating networking resources.
In another example, you might want to assign a resource type allowlist definition at the
management group level. Then you assign a more permissive policy (allowing more
resource types) on a child management group or even directly on subscriptions.
However, this example wouldn't work because Azure Policy is an explicit deny system.
Instead, you need to exclude the child management group or subscription from the
management group-level assignment. Then, assign the more permissive definition on
the child management group or subscription level. If any assignment results in a
resource getting denied, then the only way to allow the resource is to modify the
denying assignment.
Policy assignments always use the latest state of their assigned definition or initiative
when evaluating resources. If a policy definition that is already assigned is changed, all
existing assignments of that definition will use the updated logic when evaluating.
The first command specifies the ID of the management group to query. The second
command gets all of the policy assignments that are assigned to the management
group with ID 'myManagementGroup'.
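A pair of commands matching that description might look like this sketch, using the Az
PowerShell module; the management group ID is the one named in the text:

```powershell
# First command: specify the ID of the management group to query.
$mgId = 'myManagementGroup'
# Second command: get all policy assignments at that management group scope.
Get-AzPolicyAssignment -Scope "/providers/Microsoft.Management/managementGroups/$mgId"
```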
Azure PowerShell
{
  "if": {
    <condition> | <logical operator>
  },
  "then": {
    "effect": "deny | audit | modify | append | auditIfNotExists | deployIfNotExists | disabled"
  }
}
You can see that we look up the policy assignment by name and store it, along with the
policy definition ID, as both will be used later.
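A sketch of that lookup with the Az PowerShell module; the assignment name is a
hypothetical placeholder:

```powershell
# Look up the policy assignment by name and store it for later use.
$assignment = Get-AzPolicyAssignment -Name 'myPolicyAssignment'
# Store the policy definition ID as well, since it will be used later.
$definitionId = $assignment.Properties.PolicyDefinitionId
```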