
google.qwiklabs.com
Github repo for training

https://github.com/DDLSTraining/Engage/blob/master/Google-Cloud/Fundamentals/README.md
https://ddls.com.au/courses/google/

Lots of people with other cloud experience looking at the capabilities of GCP: CSIRO, ATO, ATSB, Home Affairs.
Gartner hype cycle: $$$ and poor performance unless you know what you’re doing

Web app, auto deploy: architecture decisions automated & built in -> submit. On-prem or cloud? BOOM!

GKE (Google Kubernetes Engine): can pull resources off GCP easily.

App Engine: Node/Go “runtime”

Managed services: Google Maps, Vision, etc.

See github pages for more details


Zones
DC -> lots of racks of servers.

Deploy >1 DC in one region. Multiple zones in a region: up to ~200 km apart @ ~10 ms latency between DCs.

Isolate common points of failure.

DCs deployed on different tectonic plates, under different flight paths, political climates, everything is considered

Some resources are zonal (eg VM, have to choose which zone it’s in).

Some resources are regional, some global (public IP addr).

ISO 14001 certs

Committed use ≈ AWS “reserved instances”

Pre-emptible instances ≈ AWS “spot instances”

#TODO get cross-cloud glossary

https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison

Here’s an interactive network chart with cross-referenced terminology for each part of your network:
https://lucid.app/lucidchart/13fde51a-271f-456a-b2b3-ef6869f9ee6a/view?page=GiFLkXTb1E1J#
Grant Carthew opinion: for a globally scalable IP application, GCP.

Kubernetes (Borg internally) for orchestration.


Bigtable: non-relational DB. Used for Gmail and Google Search.

Cloud Storage: blobs.

Spanner: globally scalable ACID compatible db. Not cheap.

Firestore: NoSQL, good for documents, web apps

BigQuery: often used with other cloud providers. Data warehouse.

Pub/Sub: message queuing.

Dataflow: streaming data & transformation.

Dataproc: Hadoop

AI solutions, jupyter.

AI Platform: build your own ML models, or use Vision, text-to-speech, Translation, AutoML.


GCP does have a budget control option (unlike AWS?).

Can export billing to bigquery & analyse

Have quotas – a safety stop for you; they do restrict your bill.

Pricing: sub-hour billing, pre-emptible instances, committed-use/sustained-use discounts, custom machine types.
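As a rough sketch of how a sustained-use discount plays out (the hourly rate and the 30% figure below are illustrative assumptions, not quoted prices):

```shell
# Back-of-envelope monthly cost with a sustained-use discount.
# The hourly rate and discount percentage are illustrative only.
awk 'BEGIN {
  hourly   = 0.0475            # illustrative on-demand $/hour
  hours    = 720               # ~hours in a 30-day month
  discount = 0.30              # approx. full-month sustained-use discount
  printf "on-demand: $%.2f/month\n", hourly * hours
  printf "sustained-use: $%.2f/month\n", hourly * hours * (1 - discount)
}'
```

Committed-use discounts work similarly, but require a 1- or 3-year commitment up front.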

Getting started
Need to make new trial accounts

Projects & resources level in class:

Org > folders > projects > resources [management framework & policies]

All resources exist within a project.

With other providers, deleting a resource group deletes its resources. In GC, if you delete a project, it shuts down & is kept for 30 days. So you can’t accidentally lose PROD (the owner gets an email about it and can restore the project). Like a recycle bin.
?can you merge organisations? You have to migrate projects across. Fairly easy; custom roles can make things tricky.

?can share resources between projects, thereby sharing costs.

Q: What is the best practice to track resources usage for sub-project?

A: For tracking resources that are "grouped" within the project, labels is the easiest way we've found
Google cloud console

Common tools at top.

FORWARD SLASH takes you to search bar. Easier to search.

Cloud shell tool

Console focusses on project, rather than seeing multiple resources from multiple projects.

Project name in URL, CAN have different projects in different tabs.


Naming projects
Can’t change ID later. ID is globally unique

EG so make it ACTH-prod-001 etc. Just “prod” is already being used.

project number assigned by Google.


Cloud identity and access management
Regions apply to resources
Cloud ID: ability to start using external directories eg AD
Billing admin: can assign credit cards to projects. Need to contact BA to assign billing to that project.

Predefined roles apply to a specific service.

Can also create custom roles: individual services & permissions.


Other providers work differently

^ these accounts are defaults, created automatically.


Service accounts can access multiple projects.
Cloud Identity
Best to use this, don’t use Gmail for management.

Can use Gmail, not a good idea.


Cloud Identity / Workspace account is better. Enables you to delegate rights/permissions to users logging in with Google accounts.

One way sync, creates accounts in Google.

Cloud Identity is an identity-as-a-service thing you create, then bring projects in underneath.


Everything uses REST underneath.

https://console.cloud.google.com/ 90 day trial $350

Cloud shell

uname -a displays the machine/kernel info.

gcloud compute instances list

Install GCloud to get gcloud command.

choco install gcloudsdk


Not really an SDK, more a CLI.

New browser tab

https://shell.cloud.google.com/?show=ide%2Cterminal

Needs 3rd party cookies enabled to run properly.

Can interactively run APIs; need app name/ID, auth key etc. in the Google API Explorer.
Some apps are already containerised -> Kubernetes. Bitnami deploys updated open-source containers to the cloud.

Deploy the app, may need to authorise the API.


All GCP resources are associated with a project: TRUE



Lab environment
New temporary account within Google Cloud.

Lab 1
Overview
In this lab, you use Cloud Marketplace to quickly and easily deploy a LAMP stack on a Compute Engine instance. The
Bitnami LAMP Stack provides a complete web development environment for Linux that can be launched in one click.

Component Role
Linux Operating system
Apache HTTP Server Web server
MySQL Relational database
PHP Web application framework
phpMyAdmin PHP administration tool
For more information on the Bitnami LAMP stack, see Bitnami LAMP Stack Documentation.

Objectives
In this lab, you learn how to launch a solution using Cloud Marketplace.

Task 1: Sign in to the Google Cloud Platform (GCP) Console


For each lab, you get a new GCP project and set of resources for a fixed time at no cost.
Make sure you signed into Qwiklabs using an incognito window.

Note the lab's access time (for example, img/time.png) and make sure you can finish in that time block.

There is no pause feature. You can restart if needed, but you have to start at the beginning.

When ready, click img/start_lab.png.

Note your lab credentials. You will use them to sign in to Cloud Platform Console. img/open_google_console.png

Click Open Google Console.

Click Use another account and copy/paste credentials for this lab into the prompts.

If you use other credentials, you'll get errors or incur charges.

Accept the terms and skip the recovery resource page.


Do not click End Lab unless you are finished with the lab or want to restart it. This clears your work and removes the
project.

Task 2: Use Cloud Marketplace to deploy a LAMP stack


In the GCP Console, on the Navigation menu (Navigation menu), click Marketplace.

In the search bar, type LAMP

In the search results, click LAMP Certified by Bitnami.

If you choose another LAMP stack, such as the Google Click to Deploy offering, the lab instructions will not work as
expected.

On the LAMP page, click Launch.

If this is your first time using Compute Engine, the Compute Engine API must be initialized before you can continue.

For Zone, select the deployment zone that Qwiklabs assigned to you.

Leave the remaining settings as their defaults.

If you are prompted to accept the GCP Marketplace Terms of Service, do so.
Click Deploy.

If a Welcome to Deployment Manager message appears, click Close to dismiss it.

The status of the deployment appears in the console window: lampstack-1 is being deployed. When the deployment
of the infrastructure is complete, the status changes to lampstack-1 has been deployed.

After the software is installed, a summary of the details for the instance, including the site address, is displayed. When the deployment is complete, click the Site address link in the right pane. (If the website is not responding, wait 30 seconds and try again.) If you see a redirection notice, click that link to view your new site.

Alternatively, you can click Visit the site in the Get started with LAMP Certified by Bitnami section of the page. A new
browser tab displays a congratulations message. This page confirms that, as part of the LAMP stack, the Apache
HTTP Server is running.

Close the congratulations browser tab.

On the GCP Console, under Get started with LAMP Certified by Bitnami, click SSH.

In a new window, a secure login shell session on your virtual machine appears.

In the just-created SSH window, to change the current working directory to /opt/bitnami, execute the following
command:

cd /opt/bitnami
To copy the phpinfo.php script from the installation directory to a publicly accessible location under the web server
document root, execute the following command:

sudo sh -c 'echo "<?php phpinfo(); ?>" > apache2/htdocs/phpinfo.php'


The phpinfo.php script displays your PHP configuration. It is often used to verify a new PHP installation.

To close the SSH window, execute the following command:

exit
Open a new browser tab.

Type the following URL, and replace SITE_ADDRESS with the URL in the Site address field in the right pane of the
lampstack page.

http://SITE_ADDRESS/phpinfo.php
A summary of the PHP configuration of your server is displayed.
Close the phpinfo tab.

Congratulations!
In this lab, you deployed a LAMP stack to a Compute Engine instance.

GCloud
gcloud init (use PowerShell for this)

Virtual Machines in the cloud


Labels for resources, specify region

Choose zone and region


Machine family gives you compute options (EPYC, Skylake). No ARM series.

Shared core is timeslicing core with other customers.

Confidential VMs: encrypt RAM.

Boot disk is where you choose the OS. Linux is the best OS for cloud. For .NET, use .NET Core instead.

Identity & API access

Lots of management options at the bottom

Can SSH into a device from the console (GC already knows who we are since we’re using the console)
Preemptible instances shut down with 30 s notice. Need a stateless, scalable resource (no DB) in an instance group.
Default network with subnets in all regions

Can config routes once network is set up (below is automatic network)


Setting up a route

Highly available public IP address. Anycast to the local entry point on the nearest Google network.
Proxies receive traffic & forward it.

100% uptime!

used to be able to use 3rd party CDNs


DNS, CDN, load balancing,

Preview of price, per-second pricing, recommendation for saving money, custom machine types


Lab 2
Overview
In this lab, you will create virtual machines (VMs) and connect to them. You will also create connections between the
instances.

Objectives
In this lab, you will learn how to perform the following tasks:

Create a Compute Engine virtual machine using the Google Cloud Platform (GCP) Console.

Create a Compute Engine virtual machine using the gcloud command-line interface.

Connect between the two instances.

Task 1: Sign in to the Google Cloud Platform (GCP) Console


For each lab, you get a new GCP project and set of resources for a fixed time at no cost.

Make sure you signed into Qwiklabs using an incognito window.

Note the lab's access time (for example, img/time.png) and make sure you can finish in that time block.

There is no pause feature. You can restart if needed, but you have to start at the beginning.

When ready, click img/start_lab.png.

Note your lab credentials. You will use them to sign in to Cloud Platform Console. img/open_google_console.png

Click Open Google Console.


Click Use another account and copy/paste credentials for this lab into the prompts.

If you use other credentials, you'll get errors or incur charges.

Accept the terms and skip the recovery resource page.


Do not click End Lab unless you are finished with the lab or want to restart it. This clears your work and removes the
project.

Task 2: Create a virtual machine using the GCP Console


In the Navigation menu (Navigation menu), click Compute Engine > VM instances.
Click Create.
On the Create an Instance page, for Name, type my-vm-1.
For Region and Zone, select the region and zone assigned by Qwiklabs.
For Machine type, accept the default.
For Boot disk, if the Image shown is not Debian GNU/Linux 10 (Buster), click Change and select Debian GNU/Linux 10
(Buster).
Leave the defaults for Identity and API access unmodified.
For Firewall, click Allow HTTP traffic.
Leave all other defaults unmodified.
To create and launch the VM, click Create.
Note: The VM can take about two minutes to launch and be fully available for use.

Click Check my progress to verify the objective.


Create a virtual machine using the GCP Console

Task 3: Create a virtual machine using the gcloud command line


In GCP console, on the top right toolbar, click the Open Cloud Shell button.

Cloud Shell icon

Click Continue. cloudshell_continue.png


To display a list of all the zones in the region to which Qwiklabs assigned you, enter this partial command gcloud
compute zones list | grep followed by the region that Qwiklabs or your instructor assigned you to.

Your completed command will look like this:

gcloud compute zones list | grep us-central1


Welcome to Cloud Shell! Type "help" to get started.
Your Cloud Platform project in this session is set to qwiklabs-gcp-02-6277a455aebe.
Use “gcloud config set project [PROJECT_ID]” to change to a different project.
student_02_404ffde2ae28@cloudshell:~ (qwiklabs-gcp-02-6277a455aebe)$ gcloud compute zones list | grep us-
central1
us-central1-c us-central1 UP
us-central1-a us-central1 UP
us-central1-f us-central1 UP
us-central1-b us-central1 UP
student_02_404ffde2ae28@cloudshell:~ (qwiklabs-gcp-02-6277a455aebe)$
Choose a zone from that list other than the zone to which Qwiklabs assigned you. For example, if Qwiklabs assigned
you to region us-central1 and zone us-central1-a you might choose zone us-central1-b.

To set your default zone to the one you just chose, enter this partial command gcloud config set compute/zone
followed by the zone you chose.

Your completed command will look like this:

gcloud config set compute/zone us-central1-b


To create a VM instance called my-vm-2 in that zone, execute this command:

gcloud compute instances create "my-vm-2" \


--machine-type "n1-standard-1" \
--image-project "debian-cloud" \
--image-family "debian-10" \
--subnet "default"

Note: The VM can take about two minutes to launch and be fully available for use.

To close the Cloud Shell, execute the following command:

exit
Click Check my progress to verify the objective.
Create a virtual machine using the gcloud command line
Task 4: Connect between VM instances
In the Navigation menu (Navigation menu), click Compute Engine > VM instances.

You will see the two VM instances you created, each in a different zone.

Notice that the Internal IP addresses of these two instances share the first three bytes in common. They reside on
the same subnet in their Google Cloud VPC even though they are in different zones.

To open a command prompt on the my-vm-2 instance, click SSH in its row in the VM instances list.

Use the ping command to confirm that my-vm-2 can reach my-vm-1 over the network:

ping my-vm-1.us-central1-a
Notice that the output of the ping command reveals that the complete hostname of my-vm-1 is my-vm-1.us-
central1-a.c.PROJECT_ID.internal, where PROJECT_ID is the name of your Google Cloud Platform project. GCP
automatically supplies Domain Name Service (DNS) resolution for the internal IP addresses of VM instances.
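That internal DNS pattern can be sketched in one line (all three values below are placeholders, not real lab values):

```shell
# GCP's internal DNS name for a VM follows VM_NAME.ZONE.c.PROJECT_ID.internal.
# The three values here are placeholders for illustration.
VM_NAME="my-vm-1"
ZONE="us-central1-a"
PROJECT_ID="example-project"
echo "${VM_NAME}.${ZONE}.c.${PROJECT_ID}.internal"
```

Any VM in the same VPC can resolve this name without extra DNS configuration.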

Press Ctrl+C to abort the ping command.

Use the ssh command to open a command prompt on my-vm-1:

ssh my-vm-1.us-central1-a
If you are prompted about whether you want to continue connecting to a host with unknown authenticity, enter yes
to confirm that you do.

At the command prompt on my-vm-1, install the Nginx web server:

sudo apt-get install nginx-light -y


Use the nano text editor to add a custom message to the home page of the web server:

sudo nano /var/www/html/index.nginx-debian.html


Use the arrow keys to move the cursor to the line just below the h1 header. Add text like this, and replace
YOUR_NAME with your name:

Hi from YOUR_NAME
Press Ctrl+O and then press Enter to save your edited file, and then press Ctrl+X to exit the nano text editor.

Confirm that the web server is serving your new page. At the command prompt on my-vm-1, execute this command:

curl http://localhost/
The response will be the HTML source of the web server's home page, including your line of custom text.

To exit the command prompt on my-vm-1, execute this command:

exit
You will return to the command prompt on my-vm-2

To confirm that my-vm-2 can reach the web server on my-vm-1, at the command prompt on my-vm-2, execute this
command:

curl http://my-vm-1.us-central1-a/
The response will again be the HTML source of the web server's home page, including your line of custom text.

In the Navigation menu (Navigation menu), click Compute Engine > VM instances.

Copy the External IP address for my-vm-1 and paste it into the address bar of a new browser tab. You will see your
web server's home page, including your custom text.

If you forgot to click Allow HTTP traffic when you created the my-vm-1 VM instance, your attempt to reach your web
server's home page will fail. You can add a firewall rule to allow inbound traffic to your instances, although this topic
is out of scope for this course.

Congratulations!
In this lab, you created virtual machine (VM) instances in two different zones and connected to them using ping, ssh,
and HTTP.
Storage in the cloud

Cloud Storage ≈ S3 (AWS) ≈ storage account/container/blob (Azure).


Storage class:

Not as easy to do static HTML web hosting: the bucket must have the same name as the domain name.

Buckets have retention policies, lifecycle management, versioning.


Durability ≠ availability.

https://uptime.is/
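What uptime.is calculates can be reproduced in one line; the 99.95% here is just an example figure, not a quoted SLA:

```shell
# Convert an availability percentage into an allowed downtime budget per year.
# The 99.95 figure is an example, not a specific product's SLA.
avail=99.95
awk -v a="$avail" 'BEGIN {
  year_min = 365.25 * 24 * 60                 # minutes in a year
  printf "%.0f minutes/year\n", (100 - a) / 100 * year_min
}'
```

Note this is an availability budget; durability (probability of losing data) is a separate, usually far higher, number.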
Look at rclone or gsutil

https://github.com/rclone/rclone https://rclone.org/

Cross-platform copying of content


Cloud bigtable

“non-relational” dbs, as many now support SQL.

Lots of traffic with low latency. Hadoop API.

Cloud SQL and Cloud Spanner


If you go with a non-rel db, the db complexity goes into the developer’s code.

Postgres probably best.


https://db-engines.com/en/

ACID compliant, global scalability, fully hosted.


CockroachDB is from original developers of Google Cloud Spanner.

Firestore (more document-oriented)

Firestore has better integration with mobile libraries.

Bigtable more a flat table.

Realtime streaming: if you manipulate data on the way through, use Cloud Dataflow into storage.

For data going straight into storage, Cloud Storage is most cost-effective.

For querying, use Bigtable or BigQuery if there’s going to be analytics.

https://thecloudgirl.dev/CloudStorage.html

Comparing storage options

Transcode large video files: which service? Cloud Storage. But it depends on your aims – you might just be archiving.
Bigtable: data is not necessarily blobs.

Openweathermap.org

Firestore: if someone else wants access, they can pay via their GCP account (egress/transfer fees).

JSON weather query. This is a JSON document – ideal for Firestore.
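For illustration, a single observation like that could look like this as a JSON document (the field names and values are made up, not from openweathermap.org):

```shell
# A single weather observation as a JSON document -- the document-shaped
# data that suits Firestore. All field names and values are illustrative.
cat > weather.json <<'EOF'
{
  "city": "Canberra",
  "observed_at": "2021-06-01T09:00:00Z",
  "temp_c": 12.5,
  "conditions": ["cloudy", "windy"]
}
EOF
cat weather.json
```

Each document like this would become one Firestore document, queryable by its fields.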

Lab 3: Cloud Storage and Cloud SQL


Google Cloud Fundamentals: Getting Started with Cloud Storage and Cloud SQL
50 minutes
1 Credit
Overview
In this lab, you create a Cloud Storage bucket and place an image in it. You'll also configure an application running in
Compute Engine to use a database managed by Cloud SQL. For this lab, you will configure a web server with PHP, a
web development environment that is the basis for popular blogging software. Outside this lab, you will use
analogous techniques to configure these packages.

You also configure the web server to reference the image in the Cloud Storage bucket.

Objectives
In this lab, you learn how to perform the following tasks:

Create a Cloud Storage bucket and place an image into it.

Create a Cloud SQL instance and configure it.

Connect to the Cloud SQL instance from a web server.


Use the image in the Cloud Storage bucket on a web page.

Task 1: Sign in to the Google Cloud Platform (GCP) Console


For each lab, you get a new GCP project and set of resources for a fixed time at no cost.

Make sure you signed into Qwiklabs using an incognito window.

Note the lab's access time (for example, img/time.png) and make sure you can finish in that time block.

There is no pause feature. You can restart if needed, but you have to start at the beginning.

When ready, click img/start_lab.png.

Note your lab credentials. You will use them to sign in to Cloud Platform Console. img/open_google_console.png

Click Open Google Console.

Click Use another account and copy/paste credentials for this lab into the prompts.

If you use other credentials, you'll get errors or incur charges.

Accept the terms and skip the recovery resource page.


Do not click End Lab unless you are finished with the lab or want to restart it. This clears your work and removes the
project.

Task 2: Deploy a web server VM instance


In the GCP Console, on the Navigation menu (Navigation menu), click Compute Engine > VM instances.

Click Create Instance.

On the Create an Instance page, for Name, type bloghost

For Region and Zone, select the region and zone assigned by Qwiklabs.

For Machine type, accept the default.

For Boot disk, if the Image shown is not Debian GNU/Linux 9 (stretch), click Change and select Debian GNU/Linux 9
(stretch).
Leave the defaults for Identity and API access unmodified.

For Firewall, click Allow HTTP traffic.

Click Management, security, disks, networking, sole tenancy to open that section of the dialog.

Enter the following script as the value for Startup script:

apt-get update
apt-get install apache2 php php-mysql -y
service apache2 restart
Be sure to supply that script as the value of the Startup script field. If you accidentally put it into another field, it
won't be executed when the VM instance starts.

Leave the remaining settings as their defaults, and click Create.


Instance can take about two minutes to launch and be fully available for use.

On the VM instances page, copy the bloghost VM instance's internal and external IP addresses to a text editor for
use later in this lab.

Click Check my progress to verify the objective.


Deploy a web server VM instance

Task 3: Create a Cloud Storage bucket using the gsutil command line
All Cloud Storage bucket names must be globally unique. To ensure that your bucket name is unique, these
instructions will guide you to give your bucket the same name as your Cloud Platform project ID, which is also
globally unique.

Cloud Storage buckets can be associated with either a region or a multi-region location: US, EU, or ASIA. In this
activity, you associate your bucket with the multi-region closest to the region and zone that Qwiklabs or your
instructor assigned you to.

On the Google Cloud Platform menu, click Activate Cloud Shell Activate Cloud Shell. If a dialog box appears, click
Continue.
For convenience, enter your chosen location into an environment variable called LOCATION. Enter one of these
commands:

export LOCATION=US
Or

export LOCATION=EU
Or

export LOCATION=ASIA
In Cloud Shell, the DEVSHELL_PROJECT_ID environment variable contains your project ID. Enter this command to
make a bucket named after your project ID:

gsutil mb -l $LOCATION gs://$DEVSHELL_PROJECT_ID


If prompted, click Authorize to continue.

Retrieve a banner image from a publicly accessible Cloud Storage location:

gsutil cp gs://cloud-training/gcpfci/my-excellent-blog.png my-excellent-blog.png


Copy the banner image to your newly created Cloud Storage bucket:

gsutil cp my-excellent-blog.png gs://$DEVSHELL_PROJECT_ID/my-excellent-blog.png


Modify the Access Control List of the object you just created so that it is readable by everyone:

gsutil acl ch -u allUsers:R gs://$DEVSHELL_PROJECT_ID/my-excellent-blog.png


Click Check my progress to verify the objective.
Create a Cloud Storage bucket using the gsutil command line

Task 4: Create the Cloud SQL instance


In the GCP Console, on the Navigation menu (Navigation menu), click SQL.

Click Create instance.

For Choose a database engine, select MySQL.

For Instance ID, type blog-db, and for Root password type a password of your choice.

Choose a password that you remember. There's no need to obscure the password because you'll use mechanisms to
connect that aren't open access to everyone.
Select Single zone and set the region and zone assigned by Qwiklabs.
This is the same region and zone into which you launched the bloghost instance. The best performance is achieved
by placing the client and the database close to each other.

Click Create Instance.


Wait for the instance to finish deploying. It will take a few minutes.

Click on the name of the instance, blog-db, to open its details page.

From the SQL instances details page, copy the Public IP address for your SQL instance to a text editor for use later in
this lab.

Click on Users menu on the left-hand side, and then click ADD USER ACCOUNT.

For User name, type blogdbuser

For Password, type a password of your choice. Make a note of it.

Click ADD to add the user account in the database.

Wait for the user to be created.

Click the Connections tab, and then click Add network.


If you are offered the choice between a Private IP connection and a Public IP connection, choose Public IP for
purposes of this lab. The Private IP feature is in beta at the time this lab was written.

The Add network button may be grayed out if the user account creation is not yet complete.

For Name, type web front end

For Network, type the external IP address of your bloghost VM instance, followed by /32

The result will look like this:

35.192.208.2/32
Be sure to use the external IP address of your VM instance followed by /32. Do not use the VM instance's internal IP
address. Do not use the sample IP address shown here.

Click Done to finish defining the authorized network.

Click Save to save the configuration change.


Note: If a message like "Another operation is in progress" appears, wait a few minutes until you see the green check for blog-db, then save the configuration.
Click Check my progress to verify the objective.
Create the Cloud SQL instance

Task 5: Configure an application in a Compute Engine instance to use Cloud SQL


On the Navigation menu (Navigation menu), click Compute Engine > VM instances.

In the VM instances list, click SSH in the row for your VM instance bloghost.

In your ssh session on bloghost, change your working directory to the document root of the web server:

cd /var/www/html
Use the nano text editor to edit a file called index.php:

sudo nano index.php


Paste the content below into the file:

<html>
<head><title>Welcome to my excellent blog</title></head>
<body>
<h1>Welcome to my excellent blog</h1>
<?php
$dbserver = "CLOUDSQLIP";
$dbuser = "blogdbuser";
$dbpassword = "DBPASSWORD";
// In a production blog, we would not store the MySQL
// password in the document root. Instead, we would store it in a
// configuration file elsewhere on the web server VM instance.

$conn = new mysqli($dbserver, $dbuser, $dbpassword);

if (mysqli_connect_error()) {
echo ("Database connection failed: " . mysqli_connect_error());
} else {
echo ("Database connection succeeded.");
}
?>
</body></html>
In a later step, you will insert your Cloud SQL instance's IP address and your database password into this file. For
now, leave the file unmodified.

Press Ctrl+O, and then press Enter to save your edited file.

Press Ctrl+X to exit the nano text editor.

Restart the web server:

sudo service apache2 restart


Open a new web browser tab and paste into the address bar your bloghost VM instance's external IP address
followed by /index.php. The URL will look like this:

35.192.208.2/index.php
Be sure to use the external IP address of your VM instance followed by /index.php. Do not use the VM instance's
internal IP address. Do not use the sample IP address shown here.

When you load the page, you will see that its content includes an error message beginning with the words:

Database connection failed: ...


This message occurs because you have not yet configured PHP's connection to your Cloud SQL instance.

Return to your ssh session on bloghost. Use the nano text editor to edit index.php again.

sudo nano index.php


In the nano text editor, replace CLOUDSQLIP with the Cloud SQL instance Public IP address that you noted above.
Leave the quotation marks around the value in place.

In the nano text editor, replace DBPASSWORD with the Cloud SQL database password that you defined above. Leave
the quotation marks around the value in place.

Press Ctrl+O, and then press Enter to save your edited file.

Press Ctrl+X to exit the nano text editor.

Restart the web server:


sudo service apache2 restart
Return to the web browser tab in which you opened your bloghost VM instance's external IP address. When you load
the page, the following message appears:

Database connection succeeded.


In an actual blog, the database connection status would not be visible to blog visitors. Instead, the database
connection would be managed solely by the administrator.

Task 6: Configure an application in a Compute Engine instance to use a Cloud Storage object
In the GCP Console, click Cloud Storage > Browser.

Click on the bucket that is named after your GCP project.

In this bucket, there is an object called my-excellent-blog.png. Copy the URL behind the link icon that appears in that
object's Public access column, or behind the words "Public link" if shown.

If you see neither a link icon nor a "Public link", try refreshing the browser. If you still do not see a link icon, return to
Cloud Shell and confirm that your attempt to change the object's Access Control list with the gsutil acl ch command
was successful.

Return to your ssh session on your bloghost VM instance.

Enter this command to set your working directory to the document root of the web server:

cd /var/www/html
Use the nano text editor to edit index.php:

sudo nano index.php


Use the arrow keys to move the cursor to the line that contains the h1 element. Press Enter to open up a new, blank
screen line, and then paste the URL you copied earlier into the line.

Paste this HTML markup immediately before the URL:


<img src='
Place a closing single quotation mark and a closing angle bracket at the end of the URL:

'>
The resulting line will look like this:

<img src='https://storage.googleapis.com/qwiklabs-gcp-0005e186fa559a09/my-excellent-blog.png'>
The effect of these steps is to place the line containing <img src='...'> immediately before the line containing
<h1>...</h1>

Do not copy the URL shown here. Instead, copy the URL shown by the Storage browser in your own Cloud Platform
project.

Press Ctrl+O, and then press Enter to save your edited file.

Press Ctrl+X to exit the nano text editor.

Restart the web server:

sudo service apache2 restart


Return to the web browser tab in which you opened your bloghost VM instance's external IP address. When you load
the page, its content now includes a banner image.

Congratulations!
In this lab, you configured a Cloud SQL instance and connected an application in a Compute Engine instance to it. You
also worked with a Cloud Storage bucket.

Containers in the cloud

App Engine standard is PaaS.

App Engine can also run on Kubernetes.

Using App Engine as a scaling mechanism.


Containers don’t give you 100% isolation

Containers have all dependencies included.

Does App Engine cache images (i.e. from the image repository) when you want to run a container? Depends on the underlying
Google platform.
Kubernetes and Google Kubernetes Engine

Kubernetes has ditched Docker as its runtime; it uses containerd now.


FROM statement is needed because you build container images up in layers, and you need a starting point.
E.g. start with Alpine and build a container image on top of it.
RUN statement executes commands while the image is being built.

Config python & install requirements.txt

WORKDIR sets the working directory.

RUN more commands

COPY the app from current directory to the app directory.

RUN!
Py-server is a tag.

The final . is the current directory
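The notes above can be sketched as a minimal Dockerfile. This is illustrative only: the base image, the /app directory, and the file names main.py and requirements.txt are assumptions, not taken from the course material.

```dockerfile
# Starting point: each statement below adds a layer on top of this base image
FROM python:3-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy and install dependencies first, so this layer can be cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the app from the current directory into the app directory
COPY . .

# Command executed when a container is started from this image
CMD ["python", "main.py"]
```

It would be built with `docker build -t py-server .` (py-server is the tag, the final `.` is the build context) and run with `docker run py-server`.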

Cloud Run can launch a single container. But what about hundreds of containers? That's Kubernetes.
A pod is a group of containers launched on a specific node.
Kubernetes is used by all the major cloud providers.
GKE Autopilot was released not long ago; it's useful if you don't want to get your hands too dirty managing Kubernetes.

https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview

Re the AWS Fargate question: the Google Cloud equivalent is Cloud Run, and you can also cache images with this
service using Cloud Build.
Kubernetes scales the pods behind the scenes. Configuration is declarative, using YAML.
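As a sketch of that declarative YAML, a Deployment for an nginx workload like the one used in the lab below might look like this (the names, labels, and image version are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3            # desired state: Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: nginx
  template:              # pod template stamped out for each replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.10
        ports:
        - containerPort: 80
```

You declare the desired state in the file and apply it with `kubectl apply -f deployment.yaml`; Kubernetes reconciles the cluster to match.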
Hybrid/multi-cloud: Anthos
Anthos is a management platform: run Kubernetes on-prem (GKE On-Prem) AND in the cloud using the same interface.

Priced per vCPU: $25/vCPU/month.

BigQuery on-prem would be very nice

From Ryan Mills - Google to Everyone: 03:42 PM

Early days

https://cloud.google.com/blog/products/data-analytics/introducing-bigquery-omni

But this is the start of what is called BigQuery Omni, which will be hybrid BQ

running in containers on Anthos, effectively it is building a query engine

in said cloud

so could be in AWS or Azure

and we just run instances of BQ next to the data

Container benefits: lightweight, no full OS, portable, scalable, dev/test then push to prod, loose coupling.


True. GKE is only in Google Cloud.

Can Anthos be used with other cloud providers' k8s clusters...? Yep, you can deploy it to manage AKS or EKS if you want.

Container registry

Lab 4

Overview

In this lab, you create a Google Kubernetes Engine cluster containing several containers, each containing a web
server. You place a load balancer in front of the cluster and view its contents.

Objectives

In this lab, you learn how to perform the following tasks:

Provision a Kubernetes cluster using Kubernetes Engine.

Deploy and manage Docker containers using kubectl.

Task 1: Sign in to the Google Cloud Platform (GCP) Console

For each lab, you get a new GCP project and set of resources for a fixed time at no cost.

Make sure you signed into Qwiklabs using an incognito window.

Note the lab's access time (for example, img/time.png) and make sure you can finish in that time block.

There is no pause feature. You can restart if needed, but you have to start at the beginning.

When ready, click img/start_lab.png.

Note your lab credentials. You will use them to sign in to Cloud Platform Console. img/open_google_console.png

Click Open Google Console.


Click Use another account and copy/paste credentials for this lab into the prompts.

If you use other credentials, you'll get errors or incur charges.

Accept the terms and skip the recovery resource page.

Do not click End Lab unless you are finished with the lab or want to restart it. This clears your work and removes the
project.

Task 2: Confirm that needed APIs are enabled

Make a note of the name of your GCP project. This value is shown in the top bar of the Google Cloud Platform
Console. It will be of the form qwiklabs-gcp- followed by hexadecimal numbers.

In the GCP Console, on the Navigation menu (Navigation menu), click APIs & Services.

Scroll down in the list of enabled APIs, and confirm that both of these APIs are enabled:

Kubernetes Engine API

Container Registry API

If either API is missing, click Enable APIs and Services at the top. Search for the above APIs by name and enable each
for your current project. (You noted the name of your GCP project above.)

Task 3: Start a Kubernetes Engine cluster

In GCP console, on the top right toolbar, click the Activate Cloud Shell button.

Cloud Shell icon


Click Continue. cloudshell_continue.png

For convenience, place the zone that Qwiklabs assigned to you into an environment variable called MY_ZONE. At the
Cloud Shell prompt, type this partial command:

export MY_ZONE=

followed by the zone that Qwiklabs assigned to you. Your complete command will look similar to this:

export MY_ZONE=us-central1-a

Start a Kubernetes cluster managed by Kubernetes Engine. Name the cluster webfrontend and configure it to run 2
nodes:

gcloud container clusters create webfrontend --zone $MY_ZONE --num-nodes 2

It takes several minutes to create a cluster as Kubernetes Engine provisions virtual machines for you.

After the cluster is created, check your installed version of Kubernetes using the kubectl version command:

kubectl version

The gcloud container clusters create command automatically authenticated kubectl for you.

View your running nodes in the GCP Console. On the Navigation menu (Navigation menu), click Compute Engine >
VM Instances.
Your Kubernetes cluster is now ready for use.

Click Check my progress to verify the objective.

Start a Kubernetes Engine cluster

Task 4: Run and deploy a container

From your Cloud Shell prompt, launch a single instance of the nginx container. (Nginx is a popular web server.)

kubectl create deploy nginx --image=nginx:1.17.10

In Kubernetes, all containers run in pods. This use of the kubectl create command caused Kubernetes to create a
deployment consisting of a single pod containing the nginx container. A Kubernetes deployment keeps a given
number of pods up and running even in the event of failures among the nodes on which they run. In this command,
you launched the default number of pods, which is 1.

Note: If you see a deprecation warning about a future version, you can ignore it for now and proceed.

View the pod running the nginx container:

kubectl get pods

Expose the nginx container to the Internet:

kubectl expose deployment nginx --port 80 --type LoadBalancer


Kubernetes created a service and an external load balancer with a public IP address attached to it. The IP address
remains the same for the life of the service. Any network traffic to that public IP address is routed to pods behind the
service: in this case, the nginx pod.

View the new service:

kubectl get services

You can use the displayed external IP address to test and contact the nginx container remotely.

It may take a few seconds before the External-IP field is populated for your service. This is normal. Just re-run the
kubectl get services command every few seconds until the field is populated.

Open a new web browser tab and paste your cluster's external IP address into the address bar. The default home
page of the nginx web server is displayed.

Scale up the number of pods running on your service:

kubectl scale deployment nginx --replicas 3

Scaling up a deployment is useful when you want to increase available resources for an application that is becoming
more popular.

Confirm that Kubernetes has updated the number of pods:

kubectl get pods

Confirm that your external IP address has not changed:

kubectl get services


student_01_9ab90fef2a7d@cloudshell:~ (qwiklabs-gcp-00-e99eb34d561e)$ kubectl scale deployment nginx --replicas 3
deployment.apps/nginx scaled
student_01_9ab90fef2a7d@cloudshell:~ (qwiklabs-gcp-00-e99eb34d561e)$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-674c77bcbb-dclg4   1/1     Running   0          2m35s
nginx-674c77bcbb-nxkpg   1/1     Running   0          10s
nginx-674c77bcbb-pnlqt   1/1     Running   0          10s
student_01_9ab90fef2a7d@cloudshell:~ (qwiklabs-gcp-00-e99eb34d561e)$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.3.240.1     <none>        443/TCP        5m4s
nginx        LoadBalancer   10.3.249.207   35.189.7.91   80:30604/TCP   2m19s
student_01_9ab90fef2a7d@cloudshell:~ (qwiklabs-gcp-00-e99eb34d561e)$

Return to the web browser tab in which you viewed your cluster's external IP address. Refresh the page to confirm
that the nginx web server is still responding.

Click Check my progress to verify the objective.

Run and deploy a container

Congratulations!

In this lab, you configured a Kubernetes cluster in Kubernetes Engine. You populated the cluster with several pods
containing an application, exposed the application, and scaled the application.

Applications in the cloud

Take your source code and just deploy it straight to App Engine.

Standard or Flexible?
Standard is more cost-effective: it has a free daily quota. But it's restricted: your code runs in a Google-supplied
runtime container. In Flexible you can put anything in the container.

Flexible can also connect to a VPC.


Cloud Endpoints and Apigee
Cloud Run

Create your own containers (code/image), fewer sandbox restrictions, a greater number of languages, private network
access (VPC).

Apigee Edge allows you to monetise and secure APIs.

Lab 5
Google Cloud Fundamentals: Getting Started with App Engine
20 minutes
1 Credit
Overview
In this lab, you create and deploy a simple App Engine application using a virtual environment in the Google Cloud
Shell.

Objectives
In this lab, you learn how to perform the following tasks:

Initialize App Engine.

Preview an App Engine application running locally in Cloud Shell.

Deploy an App Engine application, so that others can reach it.

Disable an App Engine application, when you no longer want it to be visible.

Set up your lab environment


For each lab, you get a new GCP project and set of resources for a fixed time at no cost.

Make sure you signed into Qwiklabs using an incognito window.

Note the lab's access time (for example, img/time.png) and make sure you can finish in that time block.

There is no pause feature. You can restart if needed, but you have to start at the beginning.

When ready, click img/start_lab.png.

Note your lab credentials. You will use them to sign in to Cloud Platform Console. img/open_google_console.png

Click Open Google Console.

Click Use another account and copy/paste credentials for this lab into the prompts.

If you use other credentials, you'll get errors or incur charges.

Accept the terms and skip the recovery resource page.


Do not click End Lab unless you are finished with the lab or want to restart it. This clears your work and removes the
project.

Activate Google Cloud Shell


Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home
directory and runs on the Google Cloud. Google Cloud Shell provides command-line access to your GCP resources.
In GCP console, on the top right toolbar, click the Open Cloud Shell button.

Cloud Shell icon

Click Continue. cloudshell_continue.png

It takes a few moments to provision and connect to the environment. When you are connected, you are already
authenticated, and the project is set to your PROJECT_ID. For example:

Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud Platform. It comes pre-installed on Cloud Shell and supports tab-completion.

You can list the active account name with this command:

gcloud auth list


Output:

Credentialed accounts:
- <myaccount>@<mydomain>.com (active)
Example output:

Credentialed accounts:
- google1623327_student@qwiklabs.net
You can list the project ID with this command:

gcloud config list project


Output:

[core]
project = <project_ID>
Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6
Full documentation of gcloud is available in the Google Cloud gcloud overview.
Task 1: Initialize App Engine
Initialize your App Engine app with your project and choose its region:
gcloud app create --project=$DEVSHELL_PROJECT_ID
When prompted, select the region where you want your App Engine application located.

Clone the source code repository for a sample application in the hello_world directory:

git clone https://github.com/GoogleCloudPlatform/python-docs-samples


Navigate to the source directory:

cd python-docs-samples/appengine/standard_python3/hello_world
Task 2: Run Hello World application locally
In this task, you run the Hello World application in a local, virtual environment in Cloud Shell.

Ensure that you are at the Cloud Shell command prompt.

Execute the following command to download and update the packages list.

sudo apt-get update


Set up a virtual environment in which you will run your application. Python virtual environments are used to isolate
package installations from the system.

sudo apt-get install virtualenv


If prompted [Y/n], press Y and then Enter.

Create the virtual environment:

virtualenv -p python3 venv
Activate the virtual environment.

source venv/bin/activate
Navigate to your project directory and install dependencies.

pip install -r requirements.txt


Run the application:

python main.py
Ignore any warnings that appear.
In Cloud Shell, click Web preview (Web Preview) > Preview on port 8080 to preview the application.
To access the Web preview icon, you may need to collapse the Navigation menu.

Result:

hello_world.png
To end the test, return to Cloud Shell and press Ctrl+C to abort the deployed service.

Using the Cloud Console, verify that the app is not deployed. In the Cloud Console, on the Navigation menu
(Navigation menu), click App Engine > Dashboard.

Notice that no resources are deployed.

Task 3: Deploy and run Hello World on App Engine


To deploy your application to the App Engine Standard environment:

Navigate to the source directory:

cd ~/python-docs-samples/appengine/standard_python3/hello_world
Deploy your Hello World application.

gcloud app deploy


If prompted "Do you want to continue (Y/n)?", press Y and then Enter.
This app deploy command uses the app.yaml file to identify project configuration.
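For reference, an app.yaml for the Python 3 standard environment can be very small. This is a sketch, not the sample repo's actual file, and the runtime version shown is an assumption:

```yaml
# Minimal App Engine standard environment configuration (sketch)
runtime: python39   # selects the Python 3.9 standard runtime
```

With no handlers or scaling settings specified, App Engine applies its defaults; `gcloud app deploy` reads this file from the current directory.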

Launch your browser to view the app at http://YOUR_PROJECT_ID.appspot.com

gcloud app browse


Copy and paste the URL into a new browser window.

Result:

disable-app.png

Congratulations! You created your first application using App Engine.

Click Check my progress to verify the objective.


Deploy the Hello World application to App Engine
Task 4: Disable the application
App Engine offers no option to Undeploy an application. After an application is deployed, it remains deployed,
although you could instead replace the application with a simple page that says something like "not in service."

However, you can disable the application, which causes it to no longer be accessible to users.

In the Cloud Console, on the Navigation menu (Navigation menu), click App Engine > Settings.

Click Disable application.

Read the dialog message. Enter the App ID and click DISABLE.

If you refresh the browser window you used to view the application site, you'll get a 404 error.

f17c85cf862ddae3.png
Only one app per project, it seems.
https://cloud.google.com/appengine/docs/standard/python/console/ states:
"A Google Cloud project contains your App Engine application", so it seems there can be only one.

Re the App Engine question around the number of apps per project:


One app per project, but more than one service. After you deploy the default service, you can then deploy any
number of additional services with the service: name field in app.yaml.
https://cloud.google.com/appengine/docs/standard/nodejs/an-overview-of-app-engine
This link describes the process well.
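A sketch of an app.yaml for one of those additional (non-default) services, assuming a Python 3 standard runtime; the service name "backend" is a made-up example:

```yaml
runtime: python39
service: backend   # any name other than "default"; omit this field for the default service
```

Each service gets its own app.yaml and is deployed separately with `gcloud app deploy`.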

I am in the similar boat to Joel and I am off now. Grant, Thank you very much for the course . Kept me engaged the
whole day. My only recommendation is that google should flip the course content. VMs/Storage almost ubiquitous
now. Very similar on AWS/Azure. However the GCP container, PaaS and API stuff extremely interesting and far less
clunky than AWS. I will likely only log on tonight to do the rest of the labs as they were so very interesting.

Command line is emphasised.

Developing/deploying/monitoring in the cloud


Cloud Debugger is the big thing here.

Cloud Functions is like Azure Functions / AWS Lambda.

Event-based execution model


Once you have very large numbers of function invocations, continuous compute becomes more cost-effective.

Functions are NOT a low-latency solution (cold starts).

https://cloud.google.com/functions/pricing
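The event-based execution model can be sketched with a minimal HTTP-triggered function. This is an illustrative sketch, not from the course: the function name hello_http and the JSON field are assumptions. In Cloud Functions the request argument is a flask.Request, of which only get_json() is used here, so the code also runs locally with a simple stub.

```python
def hello_http(request):
    """HTTP Cloud Function sketch: responds to a request with a greeting.

    `request` is a flask.Request when deployed; only its get_json() method
    is used, so any object exposing that method works for local testing.
    """
    data = request.get_json(silent=True) or {}
    name = data.get("name", "World")
    return f"Hello, {name}!"
```

Deployment would look something like `gcloud functions deploy hello_http --runtime python310 --trigger-http`; the function runs only when an event (here, an HTTP request) arrives, which is the model Cloud Functions bills for.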
Infrastructure as code

Console: Deployment Manager is GCP's native infrastructure-as-code tool.

Open-source alternative: Terraform (terraform.io).

https://www.terraform.io/

Works with other providers too.
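As a sketch, a minimal Terraform configuration for a Compute Engine VM might look like this; the project ID, zone, machine type, and resource names are illustrative assumptions:

```hcl
provider "google" {
  project = "my-project-id"   # hypothetical project ID
  region  = "us-central1"
}

resource "google_compute_instance" "web" {
  name         = "bloghost-tf"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}   # gives the VM an ephemeral external IP
  }
}
```

You would run `terraform init` then `terraform apply`; like Deployment Manager, the file declares the desired state and the tool reconciles real infrastructure to match it.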

https://www.pulumi.com/ also infrastructure as code.

Monitoring
Machine learning
Some multi-cloud apps use GCP just for BigQuery.

AutoML saves you from most of the standard ML process (collect data, label, train, evaluate, deploy).
Pre-trained models

Some other examples of AI models you can try

https://cloud.google.com/document-ai/docs/drag-and-drop

https://cloud.google.com/text-to-speech
