
Anthos is built on the firm foundation of Google Kubernetes Engine (GKE), the managed containers-as-a-service offering on Google Cloud Platform.


Core building blocks of Anthos
Google Kubernetes Engine
GKE On-prem
Istio
Velostrata
Anthos Config Management
Stackdriver
GCP Cloud Interconnect
GCP Marketplace
https://www.forbes.com/sites/janakirammsv/2019/04/14/everything-you-want-to-know-
about-anthos-googles-hybrid-and-multi-cloud-platform/#21f0dbd65b66

Anthos solution components

Core Anthos Function | Public Cloud Component | On-Premises Component
Google Kubernetes Engine (GKE) for container orchestration | Managed Kubernetes on Google Cloud Platform (GCP) | GKE On-Prem version 1.0
Multicluster Management | Via GCP console and control plane | Via GCP console and control plane
Configuration Management | Anthos Config Management (1.0) | Anthos Config Management (1.0)
VM migration to containers | Migrate for Anthos (Beta) | N/A
Service Mesh | Istio on GKE, Traffic Director | Istio OSS (1.1.7)
Logging & Monitoring | Stackdriver Logging, Stackdriver Monitoring, alerting | Stackdriver for system components
Container Marketplace | Kubernetes Applications in GCP Marketplace | Kubernetes Applications in GCP Marketplace

Anthos Config Management


Anthos Config Management automatically picks up policy changes from the repository and adjusts the Anthos Service Mesh policy accordingly.
The nomos command is a command-line tool that lets you interact with the Config Management Operator and perform other useful Anthos Config Management tasks.
To verify that Anthos Config Management is properly installed and configured on your cluster, run nomos status
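A quick sketch of the two nomos subcommands most often used here; the repository path is illustrative:

# Check that Anthos Config Management is installed and syncing on registered clusters
nomos status

# Validate a local clone of the config repo before pushing (path is an assumed location)
nomos vet --path=/path/to/config-repo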

Setup Config management (GCP Console)


https://medium.com/google-cloud/google-cloud-anthos-series-part5-a17ce89ddc7a
Select registered clusters for config management
Policy Controller
Evaluates changes to the cluster configuration and enforces compliance with operational or security controls
It supports both continuous auditing and active blocking of
changes using a built-in library of common policies or your own rules.
Configurations
Enable policy controller
Config Sync
Continuously reconciles the state of your cluster with a central set of configurations stored in one or more Git repositories
Enable Config Sync
Repository
Custom
URL
https://source.developers.google.com/ ...
Authentication Type
Google Cloud Repository
Branch
master
tag/commit
Default HEAD
Policy directory
/allpolicies
Sync wait
Period in seconds between consecutive syncs
Protocol
Git proxy
Must be a valid URL; if no protocol is supplied, defaults to HTTPS
Source Format
Unstructured
Config Management version
If the policy YAML in the configured repository is changed and pushed, the configuration is reflected automatically
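The console fields above correspond roughly to fields on the ConfigManagement resource used later in this document. A hedged sketch of that mapping (field names follow the ConfigManagement CRD as commonly documented; the repo URL, branch, policy directory, and secret type are the illustrative values from this walkthrough and should be adjusted for your environment):

# Sketch only: a ConfigManagement manifest mirroring the console settings above
cat > config-management.yaml <<EOF
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  sourceFormat: unstructured
  policyController:
    enabled: true
  git:
    syncRepo: https://source.developers.google.com/...   # your Cloud Source repository
    syncBranch: master
    syncRev: HEAD
    policyDir: /allpolicies
    syncWait: 15          # seconds between consecutive syncs
    secretType: gcenode   # assumed mapping for the console's Google Cloud Repository auth type
EOF
kubectl apply -f config-management.yaml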

Anthos Service Mesh


https://medium.com/@kasiarun/demystifying-istio-and-anthos-service-mesh-on-gke-
ff009c7c5179
Anthos Service Mesh is a suite of tools that helps you monitor and manage a
reliable service mesh on-premises or on Google Cloud. Anthos Service Mesh is
Google’s fully-supported distribution
of Istio. Because Anthos Service Mesh is compatible with the Istio APIs, it
provides all the benefits of the Istio service mesh and more.
Benefits
Traffic Management (see the routing sketch after this list)
Create canary and blue-green deployments.
Provide fine-grained control over specific routes for services.
Configure load balancing between services.
Set up circuit breakers.
Observability insights
The Anthos Service Mesh pages in the Google Cloud Console provide
the following insights into your service mesh:
Service metrics and logs for HTTP traffic within your mesh’s GKE
cluster are automatically ingested to Google Cloud.
Preconfigured service dashboards give you the information you
need to understand your services.
In-depth telemetry — powered by Cloud Monitoring, Cloud Logging,
and Cloud Trace — lets you dig deep into your service metrics and logs. You can
filter and slice your data
on a wide variety of attributes.
Service-to-service relationships at a glance help you understand
who connects to each service and the services that each service depends on.
Service level objectives (SLOs) give you insight into the health
of your services. You can easily define an SLO and alert on your own standards of
service health.
Security benefits
Mitigates risk of replay or impersonation attacks that use stolen
credentials. Anthos Service Mesh relies on mutual TLS (mTLS) certificates to
authenticate peers, rather than bearer
tokens such as JSON Web Tokens (JWT).
Ensures encryption in transit. Using mTLS for authentication also
ensures that all TCP communications are encrypted in transit.
Ensures that only authorized clients can access a service with
sensitive data, irrespective of the network location of the client and the
application-level credentials.
Mitigates the risk of user data breach within your production network.
You can ensure that insiders can only access sensitive data through authorized
clients.
Deployment options
In-cluster control plane
Managed Anthos Service Mesh
Include Compute Engine VMs in the service mesh.
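As an illustration of the traffic-management capabilities listed above, the following sketch routes 90% of traffic to a stable subset and 10% to a canary. The service name, namespace, and subsets are assumptions for the example and presuppose a matching DestinationRule:

# Hypothetical weighted canary routing for a "reviews" service
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-canary
  namespace: default
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
EOF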

Anthos Service Mesh Dashboard


Navigation Menu -> Anthos -> Dashboard
Click Service Mesh
The Services table is displayed
You should be able to view a per-service dashboard and get topology metrics.
You can explore the dashboard with different services.
Use the Topology view to better visualize your mesh

Anthos Setup
https://cloudsolutions.academy/solution/anthos-setup/
Prerequisites
Google project with billing enabled
Anthos APIs are enabled; this allows you to use Anthos features (see the command sketch after this list)
The command-line tool gcloud, which you get by installing the Cloud SDK
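A hedged sketch of enabling the relevant APIs with gcloud; anthos.googleapis.com is the core one, and the fleet hub/connect APIs are commonly enabled alongside it:

# Enable the Anthos API plus the hub and connect APIs used for cluster registration
gcloud services enable \
    anthos.googleapis.com \
    gkehub.googleapis.com \
    gkeconnect.googleapis.com \
    --project=${PROJECT_ID}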
Create Service Account
gcloud iam service-accounts create gke-anthos --project=${PROJECT_ID}

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:gke-anthos@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/owner"

gcloud iam service-accounts keys create gke-anthos-key.json \
    --iam-account=gke-anthos@${PROJECT_ID}.iam.gserviceaccount.com \
    --project=${PROJECT_ID}
Granting the owner role is not recommended; the principle of least privilege should be followed when granting permissions to users or service accounts.
Create a GKE cluster
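The cluster creation command is not shown in the original notes; a minimal sketch, assuming the cluster name and zone used by the registration command below (node count and machine type are illustrative):

# Illustrative only: a small zonal cluster matching the later registration command
gcloud container clusters create anthos-cluster \
    --zone=asia-southeast1-a \
    --num-nodes=3 \
    --machine-type=e2-standard-4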
Registering the cluster
https://cloud.google.com/sdk/gcloud/reference/container/hub/memberships/
register
Register a GKE cluster
gcloud container hub memberships register anthos \
    --gke-cluster=asia-southeast1-a/anthos-cluster \
    --service-account-key-file=gke-anthos-key.json
The above command registers the anthos-cluster cluster and creates a membership named anthos.
It installs the Connect Agent on the cluster, which enables you to view and manage your cluster from the Anthos dashboard.
The Connect Agent authenticates to Google using the service account JSON key created above.
Registering a cluster brings it into the realm of the Anthos ecosystem.
Register a non-GKE or GKE On-Prem cluster referenced from a specific kubeconfig file, and install the Connect Agent:
gcloud container hub memberships register my-cluster \
    --context=my-cluster-context \
    --kubeconfig=/home/user/custom_kubeconfig \
    --service-account-key-file=/tmp/keyfile.json
Register EKS cluster
gcloud container hub memberships register aws \
    --context=arn:aws:eks:ap-south-1:798531306129:cluster/aws-cluster \
    --kubeconfig=~/.kube/config \
    --service-account-key-file=gke-anthos-key.json
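To confirm the registrations, the fleet memberships can be listed and inspected (a small hedged example):

# List all memberships registered in the project
gcloud container hub memberships list --project=${PROJECT_ID}

# Describe the membership created for the GKE cluster above
gcloud container hub memberships describe anthos --project=${PROJECT_ID}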
Setting up Anthos Service Mesh (ASM)
Prerequisites
git
kpt
kubectl
jq
curl https://storage.googleapis.com/csm-artifacts/asm/install_asm_1.10 > install_asm
chmod +x install_asm
./install_asm \
    --project_id PROJECT_ID \
    --cluster_name anthos-cluster \
    --cluster_location asia-southeast1-a \
    --mode install \
    --ca mesh_ca \
    --output_dir DIR_PATH \
    --enable_all
The Certificate Authority used is Mesh CA. Mesh CA is a Google-managed private certificate authority that issues certificates for mutual TLS authentication within the service mesh.
The output_dir is where the script downloads the ASM packages and the Istio command-line tool istioctl, which can be used to customize the installation and also to debug and diagnose the mesh.
The enable_all option enables the required Google APIs needed to install ASM and sets the IAM permissions.
Enable the Sidecar Proxy
ASM provides mesh functionality through the use of sidecar containers (Envoy proxies).
The sidecar container runs alongside your primary container in the same pod.
As part of the ASM installation, you have to enable automatic injection of the sidecar proxy.
In practice this means labelling your namespaces with the ASM revision tag, which spins up a proxy container alongside the main container for every pod in that namespace.
kubectl label namespace <namespace> istio-injection- istio.io/rev=<asm-revision> --overwrite
The above command ensures that your pods will now have a mesh proxy container alongside the primary container.
If the pods are already running, you have to restart them to trigger the injection.
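A hedged end-to-end sketch; the namespace, deployment, and ASM revision label are illustrative and should be replaced with your own values:

# Swap the legacy injection label for the ASM revision label on the namespace
kubectl label namespace demo istio-injection- istio.io/rev=asm-1102-3 --overwrite

# Restart existing workloads so the sidecar gets injected into their pods
kubectl rollout restart deployment -n demo

# Verify: pods should now report an extra istio-proxy container
kubectl get pods -n demo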
https://cloudsolutions.academy/solution/anthos-asm-cross-cluster-mesh-
with-different-network/
Prerequisites
You have already set up a Google project with two VPCs in separate regions.
You can opt for any two available regions to implement this use case. For this article we will take Mumbai as one region and Singapore as another.
You have an Anthos GKE cluster set up in both VPCs, along with ASM version 1.8.3 installed.
You will also have to install an Istio operator that sets up an ingress gateway to serve the east-west traffic across the two clusters. See the section Set up east-west gateway below.
The ASM Certificate Authority (CA) used will be Mesh CA (only available for GKE clusters).
You could also use Citadel CA as an alternative.
Set the cluster context
kubectl config get-contexts -o name
gke_sandbox-111111_asia_south1-a_cluster-1
gke_sandbox-111111_asia_southeast1-a_cluster-2
export ctx1=gke_sandbox-111111_asia_south1-a_cluster-1
export ctx2=gke_sandbox-111111_asia_southeast1-a_cluster-2
Setup endpoint discovery between clusters
In this step you will enable each cluster to discover the service endpoints of its counterpart, so that cluster one discovers the service endpoints of the second cluster and vice versa.
istioctl x create-remote-secret --context=$ctx1 --name=cluster-1 | \
    kubectl apply -f - --context=$ctx2

istioctl x create-remote-secret --context=$ctx2 --name=cluster-2 | \
    kubectl apply -f - --context=$ctx1
You enable this by creating a secret for each cluster that grants access to the Kube API server of that cluster.
Each secret is a certificate derived from the common root CA, in this case Mesh CA.
You then apply the secret to the other cluster.
In that way secrets are exchanged and the clusters are able to see each other's service endpoints.
In this case the endpoint of the ingress gateway that will serve the east-west traffic will be discovered.
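A hedged way to confirm the exchange; remote secrets are typically created in the istio-system namespace with an istio-remote-secret- prefix, though naming can vary by Istio version:

# Each cluster should hold a remote secret for its counterpart
kubectl get secrets -n istio-system --context=$ctx1 | grep istio-remote-secret
kubectl get secrets -n istio-system --context=$ctx2 | grep istio-remote-secret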
Set up east-west gateway
Before installing ASM, create an Istio operator config file in both clusters; it sets up an ingress gateway for east-west (cross-cluster) traffic.
The Mumbai cluster will represent the west side of the traffic while the Singapore cluster will represent the east side.
The config file for the Mumbai cluster will look like the following:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster-1
      network: vpc-1
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: vpc-1
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: vpc-1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017
You can apply the above configuration as an overlay file while installing ASM using the -f flag:
istioctl install --context=$ctx1 -f istio-ingress-eastwest.yaml
An important thing to note is port 15443, which is used for cross-cluster communication.
In order to expose this port for mesh services to communicate, you will have to create a custom gateway that uses Server Name Indication (SNI) routing with the TLS extension on this port.
The code below depicts the custom gateway that actually facilitates east-west routing.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: multi-network
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"
The above gateway, named multi-network, exposes port 15443 using SNI-based routing (with the TLS extension), allowing Kubernetes services with the .local domain to communicate with the Mumbai cluster's mesh via this gateway.
The TLS mode used is AUTO_PASSTHROUGH.
This will, in effect, not terminate the TLS connection at the gateway and instead pass the request on to the target backend service.
This mode also assumes that the client or the service from the calling cluster has initiated a TLS handshake.
So if the service from the Singapore cluster uses plain vanilla http:// and not https://, the gateway will still pass the request through.
This can create a security loophole if the target backend does not handle TLS termination.
We have kept it simple, as the purpose of this use case is simply to demonstrate cross-cluster communication between two VPCs.
Apply both of the above config files in the Singapore cluster (just change the network name) and you should be all set.
https://cloudsolutions.academy/solution/anthos-asm-multi-cluster-
concept/
Anthos Service Mesh (ASM) supports a federated mesh where every cluster hosts both mesh control plane and data plane components.
Multi-cluster service mesh (single VPC network)
In this mesh network topology, two clusters are part of the same VPC and there is direct connectivity between services in the mesh across clusters.
Both clusters must be set up to trust each other.
This can be achieved by exchanging secrets (certificates derived from the common root CA) of the other cluster, thereby enabling access to the Kubernetes (GKE) API servers.
In effect, cluster one will be able to access the API server and discover the endpoints of the second cluster, and vice versa.
As both clusters are part of the same VPC, there is no need to route via an ingress gateway. Service endpoints in both clusters can communicate directly with each other.
Multi-cluster service mesh (different VPC networks)
In this mesh network topology, two clusters are set up in different VPC networks. Services in the mesh cannot communicate directly across clusters but have to use a gateway to route the east-west traffic.
To implement this model, you again have to set up trust between the clusters, as discussed in the previous section, to enable endpoint discovery across clusters.
You then have to set up an ingress gateway that allows east-west traffic in both clusters.
The gateway endpoint will be accessible over the public internet but will expose only services with the *.local domain.
This ensures that service endpoints from both clusters are able to communicate via this gateway, thereby enabling east-west traffic.
The gateway also ensures that only mTLS-enabled services are able to communicate with each other, making sure that services are indeed part of the recognized clusters.
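To make the "only mTLS-enabled services" guarantee explicit, a mesh-wide strict mTLS policy can be applied in each cluster. This is a hedged sketch using the standard Istio PeerAuthentication resource in the root namespace:

# Sketch: enforce strict mTLS mesh-wide so plaintext workloads cannot
# reach mesh services across the east-west gateway
kubectl apply --context=$ctx1 -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF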
Recalls
Register cluster
Alternate
(https://cloud.google.com/anthos/fleet-management/docs/fleet-creation)
Fleet host project
Anthos clusters on VMware and Anthos clusters on bare
metal are automatically registered to your chosen fleet at cluster creation time,
with your fleet host project and other registration
details specified in the relevant cluster configuration file.
Manual registration
gcloud container fleet memberships register MEMBERSHIP_NAME \
    --context=KUBECONFIG_CONTEXT \
    --kubeconfig=KUBECONFIG_PATH \
    --service-account-key-file=SERVICE_ACCOUNT_KEY_PATH
Enable sidecar proxy
Istio injection
Certificate CA
mesh ca (For GKE only)
citadel ca
Exchange secrets between clusters
Eastwest gateway
IstioOperator
ISTIO_META_ROUTER_MODE
"sni-dnat"
Port 15443
is used for cross cluster communication
Gateway
TLS mode
AUTO_PASSTHROUGH
hosts
*.local

Setting up Anthos Config Management (ACM)


Prerequisites
Any known source code repo like Git or Google Cloud Source
Repository (this will be used by Config Sync to sync with clusters)
nomos client tool
Setting up Config Sync
Config Sync lets you deploy configurations and security policies
consistently across multiple Kubernetes clusters and namespaces spanning hybrid and
multi-cloud environments.
The config files are stored in a source repository such as Git, and their current state in the repo is synced to the clusters where Config Sync is enabled.
gcloud beta container hub config-management enable
Download and apply the Custom Resource Definition (CRD) manifest that represents the Config Sync operator resource.
gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml config-management-operator.yaml
The above command downloads the CRD manifest file. You then apply the manifest file:
kubectl apply -f config-management-operator.yaml
You provide the Config Sync operator with read-only access to the source code repo.
The way you do this is by setting up the authentication type.
If your repo allows read-only access without any authentication, then you do not have to do anything special; just specify 'none' as the authentication type.
The following authentication types are supported:
SSH key pair
cookiefile (only supported by Google Cloud Source
Repositories)
Token based
Google service account (only supported by Google Cloud
Source Repositories)
Most repos support SSH-based authentication. It is a universal and recommended way of authenticating with a repo.
Create a SSH key pair
ssh-keygen -t rsa -b 4096 \
-C "<repo-user-name>" \
-N '' \
-f <path-to-key-file>
The user name is the one that the Config Sync operator will use to authenticate with the repo.
Register the public key with your source code repo.
Note: Config Sync does not support SSH key passphrases.
Create a Kubernetes Secret that will contain your private key.
The Secret must be created in the config-management-system namespace and named git-creds.
kubectl create ns config-management-system && \
kubectl create secret generic git-creds \
--namespace=config-management-system \
--from-file=ssh=<path-to-private-key>
Create a custom resource named ConfigManagement as defined by the
above created Config Sync operator CRD and apply it to the cluster.
This will allow us to tune or configure the behaviour of the
Config Sync.
# config-management.yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  cluster: anthos-cluster
  sourceFormat: unstructured
  git:
    syncRepo: <repo-url>
    secretType: ssh
The above configuration enables Config Sync for our cluster. You must specify the URL of the repo and the secret type as ssh, since that is the authentication type we used.
kubectl apply -f config-management.yaml
Use the tool nomos to verify the Config Sync installation.
nomos status
The above command validates whether the Config Sync operator was installed successfully.
A status of PENDING or SYNCED indicates a successful installation.
Setting up Policy Controller
Policy Controller is a way to audit and enforce compliance for your cluster through well-defined, programmable policies.
These policies are also called guardrails: they lay down rules that guard the configuration of resources against changes or updates that may indicate or reflect a security violation.
Change the config-management.yaml file created above to enable the Policy Controller.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  cluster: anthos-cluster
  sourceFormat: unstructured
  policyController:
    enabled: true
  git:
    syncRepo: <repo-url>
    secretType: ssh
Verify the Policy Controller installation.
gcloud beta container hub config-management status --project=PROJECT_ID

Output:
Name            Status  Last_Synced_Token  Sync_Branch  Last_Synced_Time      Policy_Controller
anthos-cluster  SYNCED  a687c2c            1.0.0        2021-02-17T00:15:55Z  INSTALLED
https://cloudsolutions.academy/solution/enforcing-a-policy-using-
anthos-config-managements-policy-controller/
Anthos Config Management's (ACM) Policy Controller allows you to write such governance-based policies for your clusters.
Policy Controller is based on the Open Policy Agent (OPA) Gatekeeper project and contains a library of predefined policies that can be used to guard your cluster against compliance or security violations.
A policy you write acts as a guardrail: it enforces a rule that determines whether the target resource should be admitted into the cluster or not.
It acts as an admission controller webhook that integrates with the Kubernetes API server to validate objects as they are admitted to the cluster.
The guardrail can also be used to audit objects for security loopholes.
Constraint-based framework
The Policy Controller uses the OPA Gatekeeper project, which models a constraint-based framework.
The framework has three main components (a hedged constraint example follows at the end of this section):
Constraint
Rego
Constraint template
Policy and governance are important security aspects that are addressed by ACM's Policy Controller through the OPA Gatekeeper framework.
It allows you to write user-defined policies that keep your Kubernetes clusters compliant with those policies.
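A hedged sketch of what such a constraint can look like, using the K8sRequiredLabels template from the Policy Controller / Gatekeeper constraint template library; the constraint name and required label are illustrative, and in an ACM-managed cluster you would normally commit this file to the policy repository rather than applying it directly:

# Sketch: require an "owner" label on every Namespace using a pre-defined template
kubectl apply -f - <<EOF
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: owner
EOF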

GKE On-prem
https://datacenterrookie.wordpress.com/2019/07/24/google-kubernetes-engine-
gke-on-prem-part-i/
GKE On-Prem enables users to run Google Kubernetes Engine (GKE) clusters
inside their own datacenters.
GKE Connect
Provides the ability to establish connections between external clusters and Google.
With the Anthos GKE Connect Agent installed on your Kubernetes cluster, that
cluster can reside anywhere, as long as it can connect to Anthos.
Server Name Indication (SNI)
is an extension to the TLS protocol.
It allows a client or browser to indicate which hostname it is trying to
connect to at the start of the TLS handshake.
This allows the server to present multiple certificates on the same IP
address and port number.
