
All right, so this is module two, Managing Hybrid Clusters using Kubernetes Engine. In this module we will talk about the Anthos compute layer: how do we create a consistent compute layer across different environments? Then we will talk about the GKE On-Prem architecture: how do you install it, and what are the different mechanisms behind the scenes? And then we will talk about network connectivity between the two environments.

So, containers and Kubernetes as a compute layer. The idea here is that the industry has adopted containers as the de facto way to create portable workloads. The idea of a container holding only the business logic, its dependencies, and a runtime, in a very lightweight, deployable unit that can move anywhere, is very appealing, and a lot of people have adopted it. And as you accumulate loads and loads of containers, you start needing an orchestrator to take care of all of them, right? This is mostly covered in the Kubernetes section
or the prerequisites for this course. Containers have also matured beyond being only stateless applications, which was their main use case in the beginning. The functionality has been extended a lot, so now you can have a StatefulSet, you can have access to your hardware, and a lot of other things, and there is a lot of active development in that area. It opens up a lot of business use cases as the product matures, and it lets you encompass a lot of business needs in a container. So, for solutions for running containers, I'm going to
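A quick aside on the StatefulSet mentioned above: a minimal manifest sketch looks roughly like this. The names here (web, the nginx image, the 1 Gi volume size) are placeholder assumptions for illustration, not anything from this module.

```shell
# Sketch: a minimal StatefulSet plus the headless Service it requires.
# All names and sizes are illustrative placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None        # headless Service gives pods stable DNS names
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web       # must reference the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:  # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
```

Each replica gets a stable identity (web-0, web-1) and its own PersistentVolumeClaim, which is what makes stateful workloads practical on Kubernetes.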
talk a little bit about GKE, and then move into GKE On-Prem. GKE basically provides you with a fully managed solution, and what you get with that is Kubernetes as a service. The master, the control plane, is completely abstracted away from you, so you don't have to install Kubernetes; the point is that Kubernetes is just really difficult to install and manage yourself. If you want, you can have a turnkey solution from a cloud provider that gives you all the functionality and manages the master and the nodes underneath it. So you don't have to worry about virtual machines, patches, or upgrades; all of those things are fully managed for you by the cloud provider, which also gives you an SLA, and that is really cool. It also brings you a lot of benefits from the cloud, including high availability. You can have a regional cluster, where the control plane and your nodes are replicated across each zone of a region, so if one of the zones fails you have a failover, and it all operates in a very seamless way. Google developed Kubernetes and ran it for
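As an aside, the regional-cluster idea can be sketched with a single command. This is a hedged sketch: the cluster name and region are placeholder assumptions, and you should check the current gcloud reference before relying on it.

```shell
# Sketch: create a regional GKE cluster (placeholder name and region).
# With --region instead of --zone, the control plane and the node pool
# are replicated across the zones of that region, tolerating a zone failure.
gcloud container clusters create demo-cluster \
  --region us-central1 \
  --num-nodes 1   # node count is per zone, so three nodes in a 3-zone region
```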
a long time and then open sourced it; today the project is part of the Cloud Native Computing Foundation (CNCF), under the Linux Foundation, which governs it. And we run all of our GKE infrastructure as certified Kubernetes under the CNCF conformance program, so you get a certified Kubernetes configuration with the best practices that we have developed over the years. As you know, Kubernetes basically allows
you to have loads and loads of configuration options and ways to orchestrate your workloads, and we basically automate all of those things for you. So that is GKE in a nutshell. And GKE On-Prem basically
provides you, again, with a turnkey, production-grade Kubernetes with a best-practices configuration. What we do is ship you a virtual appliance that you run in your on-premises environment as an automated solution. We are trying to give you the same experience, or as close an experience as possible, as what you get in the cloud, on your own premises. It provides you with an easy upgrade path to the latest Kubernetes releases. If any of you have tried to upgrade existing workloads with the open source distribution, you will know that it is a bit tricky: it sometimes demands downtime, and there are a lot of places where things can go wrong. What we provide instead is an opinionated release that is upgraded for you automatically. It is based on a software stack, so you don't have to buy any hardware. This is really important to Google: you do not have to bring our hardware into your own premises, because it is all software. It runs on your vSphere environment currently, and perhaps on other platforms in the future, and you get that consistency model across your cloud and your on-premises environment from a software stack rather than from buying physical hardware. That is one of the really big advantages. It is also integrated with, and speaks nicely to, the rest of the Google solutions, such as the GCP container services, Cloud Build, Container Registry, logging, monitoring, and so on. So with GKE and GKE On-Prem, you can use any
type of an identity
that you choose, right? We want to make sure that when we extend
the environment into your on-premise, we give you the choices, right? You have long
standing workloads and
configuration in your on-premise and we want to play nice with all
of them as much as possible. So for instance, in terms of identity,
you can choose your identity providers. If you choose to use, let's say, any
of your on-premises identity providers. You can also choose Cloud Identity, which
is Google identity
as a service in the cloud. But you can adopt really whatever you're
using already in your on-premise and we can meet you there. So if you want your
identity piece to be
managed by yourself, you can do that. You can create a secure connection across
the environments without the need for complicated VPN. And this is talking
about the control plane. So the control plane, the way that we
control the different clusters and recreate that orchestration across from
them, does not really need a VPN or a physical connection to
your Google Data Center. And we will talk about how it
is implemented in a second. And one of the really cool
features that you would get, [COUGH] is you would also get
a multi-cluster dashboard. And we will talk about that in a second,
but let's talk about different case scenarios first. What you will find is that a lot of developers and a lot of operators find clusters very, very useful, and therefore you will find loads and loads of clusters being spun up. There are many use cases for that. For instance, you might want cloud bursting. Cloud bursting is a use case where you have, let's say, your frontend on premises, and you know that around Black Friday you will have a surge in demand. To accommodate that demand you would normally have to buy a lot of machines, but it's just a few months in the year that you need that extra power. So what you can do is create this hybrid connection and deploy a lot of workloads to the cloud to get over that spike, so that the cloud is able to provide you
with that burst of compute. Another one is cross-environment execution: maybe you have development and testing in the cloud, but production on premises. Another is invoking legacy dependencies: maybe you have your frontend in the cloud, because it is stateless and it is okay to run it there, and you can create a really nice deployment with workloads across the globe, caching data closer to the users; but maybe you depend on a system or some information that you have to keep on premises, and the frontend communicates with it there. Or maybe you have data sovereignty requirements, so you have to keep all your data that is in Europe, in some form, in your own premises; that can be another case scenario for a hybrid connection. And maybe multi-site deployment as well,
so you want to have that global reach. The GKE dashboard here is what you will find, and hopefully this looks familiar to you. But you can also see here that there is an on-premises cluster as well: from the same dashboard that you are used to in GKE, you can manage your GKE On-Prem clusters, and that is a very clever thing to do. And not only your on-premises clusters: if you have multiple clusters in multiple locations, you can register them all in the GKE dashboard, and you will be able to access all of your workloads and manage them from one unified place, in the same manner, in the same graphical user interface. That way your operators or your developers do not need to learn new tools, or switch context every single time they need to move from one place to another. Because without it, imagine that you are using the Kubernetes dashboard; you know you need to secure that Kubernetes dashboard, because we know it is very, very insecure, and even then you don't have a single place where you can go. All the systems look the same, you have the same dashboard, but you have to go to each and every dashboard to look into that environment. What you have here is that one place: you can go to the same place and have all of your registered clusters together, which is just really nice.
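To make the earlier hybrid use cases a little more concrete, here is a hedged sketch of moving work between a cloud cluster and an on-prem cluster once both are in your kubeconfig. The context names and the frontend.yaml manifest are placeholder assumptions, not artifacts from this course.

```shell
# Sketch: deploy the same manifest to a cloud cluster and an on-prem
# cluster by switching kubectl contexts (context names are placeholders).
kubectl config get-contexts                              # list available clusters
kubectl config use-context gke_my-proj_us-central1_demo-cluster
kubectl apply -f frontend.yaml                           # e.g. burst capacity in the cloud
kubectl config use-context onprem-user-cluster
kubectl apply -f frontend.yaml                           # same manifest on premises
```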

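And, related to the multi-cluster dashboard above, registering an external cluster looked roughly like the sketch below at the time of this course. The membership name, context, and key file path are placeholders, and the exact gcloud command group has changed across releases (newer releases use `gcloud container fleet memberships register`), so treat this as illustrative only.

```shell
# Sketch: register an external (e.g. on-prem) cluster so it appears in
# the multi-cluster GKE dashboard. All names and paths are placeholders.
gcloud container hub memberships register onprem-user-cluster \
  --context=onprem-user-cluster \
  --kubeconfig=$HOME/.kube/config \
  --service-account-key-file=./connect-sa-key.json
```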