Anthos Setup
https://cloudsolutions.academy/solution/anthos-setup/
Prerequisites
Google project with billing enabled
Anthos APIs enabled; these allow you to use Anthos features
The gcloud command-line tool, which you get by installing the Cloud SDK
Create Service Account
gcloud iam service-accounts create gke-anthos --project=${PROJECT_ID}
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:gke-anthos@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/owner"
gcloud iam service-accounts keys create gke-anthos-key.json \
    --iam-account=gke-anthos@${PROJECT_ID}.iam.gserviceaccount.com \
    --project=${PROJECT_ID}
Granting the owner role is not recommended; the principle of least
privilege should be followed when granting permissions to users or service
accounts.
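A narrower alternative (a sketch; per Google's cluster-registration documentation, the Connect Agent only needs the gkehub.connect role) would be:

```
# Grant only the role required for registering clusters via Connect,
# instead of roles/owner.
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:gke-anthos@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/gkehub.connect"
```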
Create a GKE cluster
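The article assumes the cluster already exists; a minimal creation sketch (machine type and node count are illustrative assumptions — ASM requires at least 4 vCPUs per node, and Workload Identity should be enabled):

```
gcloud container clusters create anthos-cluster \
    --zone=asia-southeast1-a \
    --machine-type=e2-standard-4 \
    --num-nodes=3 \
    --workload-pool=${PROJECT_ID}.svc.id.goog
```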
Registering the cluster
https://cloud.google.com/sdk/gcloud/reference/container/hub/memberships/
register
Register a GKE cluster
gcloud container hub memberships register anthos \
    --gke-cluster=asia-southeast1-a/anthos-cluster \
    --service-account-key-file=gke-anthos-key.json
The above command registers the anthos-cluster cluster and
creates a membership named anthos.
It will install the Connect Agent on the cluster that will
enable you to view
and manage your cluster from the Anthos dashboard.
The Connect Agent will authenticate to Google using the
above created service account JSON key.
Registering a cluster means the cluster is now part of
the Anthos ecosystem.
Register a non-GKE or GKE On-Prem cluster referenced from a specific
kubeconfig file, and install the Connect Agent:
gcloud container hub memberships register my-cluster \
    --context=my-cluster-context \
    --kubeconfig=/home/user/custom_kubeconfig \
    --service-account-key-file=/tmp/keyfile.json
Register EKS cluster
gcloud container hub memberships register aws \
    --context=arn:aws:eks:ap-south-1:798531306129:cluster/aws-cluster \
    --kubeconfig=~/.kube/config \
    --service-account-key-file=gke-anthos-key.json
Setting up Anthos Service Mesh (ASM)
Prerequisites
git
kpt
kubectl
jq
curl https://storage.googleapis.com/csm-artifacts/asm/install_asm_1.10 > install_asm
chmod +x install_asm
./install_asm \
    --project_id PROJECT_ID \
    --cluster_name anthos-cluster \
    --cluster_location asia-southeast1-a \
    --mode install \
    --ca mesh_ca \
    --output_dir \
    --enable_all
The Certificate Authority used is Mesh CA, a Google-managed private
certificate authority that issues certificates for mutual TLS
authentication within the service mesh.
The output_dir is where the script downloads the ASM packages and
the Istio command-line tool istioctl, which can be used to customize the
installation and to debug and diagnose the mesh.
The enable_all option enables the required Google APIs and sets the
IAM permissions needed to install ASM.
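Once the script completes, a quick sanity check (a hedged sketch; pod names vary with the ASM revision installed) is to confirm the control plane is running:

```
# istiod and any gateway pods should be in Running state.
kubectl get pods -n istio-system
```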
Enable the Sidecar Proxy
ASM provides mesh functionality through sidecar containers that run
the Envoy proxy.
The sidecar container runs alongside your primary container in
the same pod.
As part of ASM installation, you have to enable automatic
injection of the sidecar proxy.
In effect, you label your namespaces with the ASM revision tag,
which spins up a proxy container alongside your main container in that
namespace.
kubectl label namespace <namespace> istio-injection- istio.io/rev=<asm-revision> --overwrite
(The trailing dash after istio-injection removes the legacy injection label, if present.)
The above command will ensure that your pods will now have
a mesh proxy container alongside its primary container.
If the pods are already running, then you have to restart
the pods to trigger the injection.
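For example, a rolling restart of the workloads in the labeled namespace re-creates the pods so the webhook can inject the proxy (the namespace name is a placeholder):

```
kubectl rollout restart deployment -n <namespace>
```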
https://cloudsolutions.academy/solution/anthos-asm-cross-cluster-mesh-
with-different-network/
Prerequisites
You have already set up a Google project with two VPCs in
separate regions.
You can opt for any two available regions to implement this
use case. For this article we take Mumbai as one region and Singapore as
the other.
You have an Anthos GKE cluster set up in each VPC, with
ASM version 1.8.3 installed.
You will also have to install an Istio operator that
represents the ingress gateway serving east-west traffic across the two
clusters. See the section Set up east-west gateway below.
The ASM Certificate Authority (CA) used will be Mesh CA
(only available for GKE clusters).
You could also use Citadel CA as an alternate option.
Set the cluster context
kubectl config get-contexts -o name
gke_sandbox-111111_asia-south1-a_cluster-1
gke_sandbox-111111_asia-southeast1-a_cluster-2
export ctx1=gke_sandbox-111111_asia-south1-a_cluster-1
export ctx2=gke_sandbox-111111_asia-southeast1-a_cluster-2
Setup endpoint discovery between clusters
In this step you enable each cluster to discover the
service endpoints of its counterpart, so that cluster one discovers the
service endpoints of cluster two and vice versa.
istioctl x create-remote-secret --context=$ctx1 --name=cluster-1 | \
    kubectl apply -f - --context=$ctx2
istioctl x create-remote-secret --context=$ctx2 --name=cluster-2 | \
    kubectl apply -f - --context=$ctx1
You enable this by creating, for each cluster, a secret that
grants access to the kube API server of that cluster.
Trust between the clusters comes from certificates issued by the
common root CA, in this case Mesh CA.
You then apply the secret to the other cluster.
In that way secrets are exchanged and the clusters are able
to see the service endpoints of each other.
In this case the endpoint of ingress gateway that will
serve the east-west traffic will be discovered.
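To confirm the exchange worked, you can list the remote secrets in each cluster (the istio/multiCluster label is applied by istioctl x create-remote-secret; exact labels may vary by Istio version):

```
kubectl get secrets -n istio-system -l istio/multiCluster=true --context=$ctx1
kubectl get secrets -n istio-system -l istio/multiCluster=true --context=$ctx2
```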
Set up east-west gateway
Before installing ASM, create an Istio operator config file
in both clusters to set up an ingress gateway for east-west
(cross-cluster) traffic.
The Mumbai cluster will represent the west side of the traffic
while the Singapore cluster will represent the east side.
The config file for the Mumbai cluster will look like the
following:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster-1
      network: vpc-1
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: vpc-1
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: vpc-1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017
You can apply the above configuration as an overlay file
while installing ASM using the -f flag:
istioctl install --context=$ctx1 -f istio-ingress-eastwest.yaml
An important thing to note is port 15443, which is used for
cross-cluster communication.
In order to expose this port for mesh services to
communicate, you have to create a custom gateway that uses Server Name
Indication (SNI) routing, a TLS extension, on this port.
The code below depicts the custom gateway that
facilitates this east-west routing.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: multi-network
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"
The above gateway, named multi-network, exposes port 15443
using SNI-based routing (with the TLS extension), allowing Kubernetes
services with the .local domain to communicate
with the Mumbai cluster's mesh via this gateway.
The TLS mode used is AUTO_PASSTHROUGH.
This will, in effect, not terminate the TLS
connection at the gateway and instead pass on the request to the target backend
service.
This mode also assumes that the client or the service from
the calling cluster has initiated a TLS handshake.
So if the service from the Singapore cluster uses plain
vanilla http:// and not https://, the gateway will still pass through the request.
This can create a security loophole if the target backend
does not handle TLS termination.
We have kept it simple, as the purpose of this use case is
to simply demonstrate cross cluster communication between two VPCs.
Apply both of the above config files in the Singapore cluster
(just change the name of the network) and you should be all set.
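For reference, the Singapore overlay differs only in the cluster and network identifiers (assuming the cluster-2/vpc-2 naming consistent with the contexts above); every occurrence of the network value must change, including the gateway label and the ISTIO_META_REQUESTED_NETWORK_VIEW env var:

```
# Changed fields for the Singapore (cluster-2) overlay; the rest of the
# IstioOperator spec is identical to the Mumbai file.
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster-2
      network: vpc-2
```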
https://cloudsolutions.academy/solution/anthos-asm-multi-cluster-
concept/
Anthos Service Mesh (ASM) supports federated mesh where every cluster
will host both mesh control and data plane components.
Multi cluster service mesh (single VPC network)
In this mesh network topology, two clusters are part of the
same VPC and there is direct connectivity between services in the mesh across
clusters.
Both the clusters must be setup to trust each other.
This can be achieved by exchanging secrets (certificate
derived from common root CA) of the other cluster thereby enabling access to
Kubernetes (GKE) API servers.
In effect, cluster one will be able to access API server
and discover endpoints of the second cluster and vice versa.
As both clusters are part of the same VPC, there is no need
to route via an ingress gateway; service endpoints in both clusters can
communicate with each other directly.
Multi cluster service mesh (different VPC networks)
In this mesh network topology, two clusters are set up
in different VPC networks. Services in the mesh cannot communicate directly
across clusters but have to use a gateway to route the east-west traffic.
To implement this model, you have to again set up trust
between clusters, as discussed in the previous section, to enable endpoints
discovery across clusters.
You then have to set up, in both clusters, an ingress gateway
that allows east-west traffic.
The gateway endpoint will be accessible over public
internet but will expose only services with *.local domain.
This will ensure services endpoints from both the clusters
will be able to communicate via this gateway thereby enabling east-west traffic.
The gateway will also ensure that only mTLS enabled
services are able to communicate with each other thereby making sure that services
are indeed part of the
recognized clusters.
Recalls
Register cluster
Alternate
(https://cloud.google.com/anthos/fleet-management/docs/fleet-creation)
Fleet host project
Anthos clusters on VMware and Anthos clusters on bare
metal are automatically registered to your chosen fleet at cluster creation time,
with your fleet host project and other registration
details specified in the relevant cluster configuration file.
Manual registration
gcloud container fleet memberships register MEMBERSHIP_NAME \
    --context=KUBECONFIG_CONTEXT \
    --kubeconfig=KUBECONFIG_PATH \
    --service-account-key-file=SERVICE_ACCOUNT_KEY_PATH
Enable sidecar proxy
Istio injection
Certificate CA
Mesh CA (GKE only)
Citadel CA
Exchange secrets between clusters
Eastwest gateway
IstioOperator
ISTIO_META_ROUTER_MODE
"sni-dnat"
Port 15443
is used for cross cluster communication
Gateway
tls mode
AUTO_PASSTHROUGH
hosts
*.local
GKE On-prem
https://datacenterrookie.wordpress.com/2019/07/24/google-kubernetes-engine-gke-on-prem-part-i/
GKE On-Prem enables users to run Google Kubernetes Engine (GKE) clusters
inside their own datacenters.
GKE Connect
Provides ability to establish new connections between external clusters and
Google.
With the Anthos GKE Connect Agent installed on your Kubernetes cluster, that
cluster can reside anywhere, as long as it can connect to Anthos.
Server Name Indication (SNI)
is an extension to the TLS protocol.
It allows a client or browser to indicate which hostname it is trying to
connect to at the start of the TLS handshake.
This allows the server to present multiple certificates on the same IP
address and port number.
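You can observe SNI on the command line with openssl (the hostname here is illustrative):

```
# -servername sets the SNI extension in the ClientHello, letting the
# server present the certificate matching that hostname.
openssl s_client -connect example.com:443 -servername example.com
```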