You can find the most up-to-date technical documentation on the VMware by Broadcom website at:
https://docs.vmware.com/
VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2021-2024 Broadcom. All Rights Reserved. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.
Contents

Updated Information
Getting Started with VMware vSphere Container Storage Plug-in
  Enable Volume Snapshot and Restore After an Upgrade to Version 2.5.x or Later
  Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in
  Enable Migration of In-Tree vSphere Volumes to vSphere Container Storage Plug-in
  Deploying vSphere Container Storage Plug-in on Windows
  Enable vSphere Container Storage Plug-in with Windows Nodes
Getting Started with VMware vSphere Container Storage Plug-in
The Getting Started with VMware vSphere Container Storage Plug-in documentation provides information about setting up and using VMware vSphere® Container Storage™ Plug-in. vSphere Container Storage Plug-in, also called the upstream vSphere CSI driver, is a volume plug-in that runs in a native Kubernetes cluster deployed in vSphere and is responsible for provisioning persistent volumes on vSphere storage.
At VMware, we value inclusion. To foster this principle within our customer, partner, and internal
community, we create content using inclusive language.
Intended Audience
This information is intended for developers and vSphere administrators who have a basic
understanding of Kubernetes and are familiar with container deployment concepts.
Updated Information
The Getting Started with VMware vSphere Container Storage Plug-in documentation is updated with each release of the product or when necessary. The following table provides its update history.
Revision      Description

11 DEC 2023   Added a statement about limitations of RWX volumes backed by vSAN File Service. See Provisioning File Volumes with vSphere Container Storage Plug-in.
08 DEC 2023   Updated information about the vSphere version supported for HCI Mesh deployment in vSphere Functionality Supported by vSphere Container Storage Plug-in.
21 NOV 2023   Updated information about Storage vMotion in vSphere Functionality Supported by vSphere Container Storage Plug-in.
19 SEP 2023   Added information about vSphere versions recommended in Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in.
15 SEP 2023   - Updated information about setting up a vSAN stretched cluster in Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster.
              - Added a new section about HCI Mesh deployment in Using vSphere Container Storage Plug-in for HCI Mesh Deployment.
13 JUL 2023   - Updated the prerequisites in Deploy vSphere Container Storage Plug-in with Topology.
              - Added information about patch releases to Kubernetes Versions Compatible with vSphere Container Storage Plug-in.
20 JUN 2023   Corrected the name of the VM storage policies privilege. See vSphere Roles and Privileges.
12 MAY 2023   Updated the prerequisites in Use a Secure Connection for vSphere Container Storage Plug-in and Use a Secure Connection in the Environment with Multiple vCenter Server Instances.
27 APR 2023   Updated the PVC and PV example in Statically Provision a Block Volume with vSphere Container Storage Plug-in.
24 APR 2023   Updated the topology example in Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances.
18 APR 2023   Updated support statement for thick provisioning on VMFS datastores. See vSphere Functionality Supported by vSphere Container Storage Plug-in.
13 APR 2023   Changed maximum supported Kubernetes version from 1.26 to 1.27. See Compatibility Matrices for vSphere Container Storage Plug-in.
vSphere Container Storage Plug-in Concepts
Cloud Native Storage (CNS) integrates vSphere and Kubernetes and offers capabilities to create and manage container volumes in a vSphere environment. CNS consists of two components: a CNS component in vCenter Server and a vSphere volume driver in Kubernetes, called vSphere Container Storage Plug-in.
The main goal of CNS is to enable vSphere and vSphere storage, including vSAN, as a platform to run stateful Kubernetes workloads. vSphere offers a highly reliable and performant data path that is mature for enterprise use. CNS exposes this data path to Kubernetes and brings an understanding of Kubernetes volume and pod abstractions to vSphere.
[Figure: Cloud Native Storage architecture. DevOps users interact with the Kubernetes cluster that runs vSphere Container Storage Plug-in; the VI admin uses the vSphere Client; the CNS component with its cache DB runs in vCenter Server.]
CNS leverages the existing Storage Policy Based Management (SPBM) functionality for volume provisioning. DevOps users can use storage policies, created by the vSphere administrator, to specify storage SLAs for application volumes within Kubernetes. CNS enables DevOps users to self-provision storage for their apps with appropriate storage SLAs, and honors these SLAs by provisioning each volume on an SPBM policy-compliant datastore in vSphere. The SPBM policy is applied at the granularity of a container volume.
CNS supports block volumes backed by First Class Disk (FCD) and file volumes backed by vSAN
file shares. A block volume can only be attached to one Kubernetes pod with ReadWriteOnce
access mode at any point in time. A file volume can be attached to one or more pods with
ReadWriteMany/ReadOnlyMany access modes.
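To make an SPBM policy consumable from Kubernetes, its name is referenced in a StorageClass. A minimal sketch, assuming a policy named Gold exists in vSphere (the StorageClass name and policy name are illustrative, not from this document):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-sc                  # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Gold"      # SPBM policy created by the vSphere administrator

PVCs that reference this StorageClass are then provisioned on a datastore compliant with the Gold policy.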
- CSI plug-in: The CSI plug-in is responsible for volume provisioning, attaching and detaching volumes to and from node VMs, and mounting, formatting, and unmounting volumes from the pod within the node VM. It is built as an out-of-tree CSI plug-in for Kubernetes.
- Syncer: The syncer is responsible for pushing the PV, PVC, and pod metadata to CNS. It also offers a CNS operator that is used in vSphere with Tanzu. For information, see the vSphere with Tanzu documentation.
The vSphere Container Storage Plug-in controller provides an interface used by container orchestrators to manage the life cycle of vSphere volumes. It allows you to create, expand, and delete volumes, and to attach and detach volumes to and from node VMs.
The vSphere Container Storage Plug-in node allows you to format and mount volumes to nodes, and to use bind mounts for the volumes inside the pod. Before a volume is detached, the vSphere Container Storage Plug-in node unmounts the volume from the node. The vSphere Container Storage Plug-in node runs as a DaemonSet inside the cluster.
Syncer
The metadata syncer is responsible for pushing PV, PVC, and pod metadata to CNS. The data appears on the CNS dashboard in the vSphere Client and helps vSphere administrators determine which Kubernetes clusters, apps, pods, PVCs, and PVs are using a volume.
Full synchronization is responsible for keeping CNS up to date with Kubernetes volume metadata information, such as PVs, PVCs, and pods.
In addition, availability of specific Kubernetes functionality that vSphere Container Storage Plug-
in supports might require a combination of specific vSphere and vSphere Container Storage
Plug-in versions. Make sure that you follow these requirements. See Supported Kubernetes
Functionality.
For example, vSphere 8.0 requires vSphere Container Storage Plug-in version 2.7 at a minimum.
When you use vSphere Container Storage Plug-in with vSphere, the following considerations apply:
- Make sure that your vCenter Server and ESXi versions match. If you have a newer vCenter Server version, but older ESXi hosts, new features added in the latest vCenter Server do not work until you upgrade all ESXi hosts to the newer version.
- For bug fixes and performance improvements, you can deploy the latest patch version of vSphere Container Storage Plug-in without upgrading vSphere. The driver is backward compatible with older vSphere releases.

Note To take advantage of critical bug fixes, make sure to upgrade to the latest patch release available for each minor version of vSphere Container Storage Plug-in. For more information on specific bug fixes, see Release Notes on the VMware vSphere Container Storage Plug-in Documentation page.
In addition, VMware provides support for features that have been declared GA with vSphere Container Storage Plug-in but are still at a Beta stage with Kubernetes. Note that because feature details might change after they transition to GA status with Kubernetes, you might need to perform additional configuration steps during the vSphere Container Storage Plug-in upgrade. For information about Kubernetes feature stages, see https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#using-a-feature.
Some features might be supported only at the Alpha or Beta level. Alpha and Beta features do not receive sufficient testing and are not recommended for production use. The VMware Support team does not support issues reported for these features. Upgrades from Alpha to Beta and from Beta to GA are not supported, because each subsequent release might introduce incompatible changes.
Alpha and Beta features are not documented in the Getting Started with VMware vSphere
Container Storage Plug-in documentation. For information about Alpha and Beta features, see
https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/master/docs/book/features.
[Table: supported Kubernetes features — columns: Feature, Support Status, Minimum Required vSphere Container Storage Plug-in Version, vSphere 7.0 and Later, vSphere 6.7 Update 3.]
- vSphere Container Storage Plug-in is backward and forward compatible with vSphere releases.
- Features added in the latest vSphere releases do not work on older versions of vSphere Container Storage Plug-in.
- For more information about upgrading vSphere Container Storage Plug-in, see Upgrading vSphere Container Storage Plug-in.
Number of volumes:
- Block volumes: 10,000 volumes per vCenter Server for vSAN, NFS 3, and VMFS datastores; 840 volumes per vCenter Server for Virtual Volumes datastores
- File volumes: 100 file shares per vSAN cluster; 100 concurrent clients for RWM PVs
Note
- For higher availability, run the vSphere CSI controller with a minimum of three replicas. If your Kubernetes cluster has more than three control plane nodes, set the CSI controller replicas to match the number of control plane nodes on the cluster.
- If your development or test Kubernetes cluster does not contain multiple control plane nodes, set the replica count to one.
- Limits for single access volumes apply to both single access file system volumes and single access block volumes.
- vSphere Container Storage Plug-in uses only Paravirtual SCSI controllers to attach volumes to node VMs. Each non-Paravirtual SCSI controller on the node VM reduces the maximum limit of RWO PVs per node VM by 15.
vMotion: Yes
vSAN, including vSAN Express Storage Architecture (ESA), Virtual Volumes, NFS 3, and VMFS datastores: Yes
VM Encryption: Yes
vSphere Container Storage Plug-in Deployment
You can install vSphere Container Storage Plug-in on a generic, also called vanilla, Kubernetes cluster. Before installing the plug-in, you must meet specific prerequisites. You can later upgrade the plug-in to a new release.
Installation procedures in this section apply only to generic Kubernetes clusters. Supervisor clusters and Tanzu Kubernetes clusters in vSphere with Tanzu use the preinstalled vSphere Container Storage Plug-in.
- Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances
For information, see vSphere Versions Compatible with vSphere Container Storage Plug-in.
To learn how to create and assign a role, see the vSphere Security documentation.
CNS-DATASTORE: Datastore > Low level file operations. Allows read, write, delete, and rename operations in the datastore browser. Required on shared datastores where persistent volumes reside.
CNS-HOST-CONFIG-STORAGE: Host > Configuration > Storage partition configuration. Allows vSAN datastore management. Required on a vSAN cluster with vSAN file service; required for file volumes only.
CNS-VM: Virtual machine > Change Configuration > Add existing disk. Allows adding an existing virtual disk to a virtual machine. Required on all node VMs.
Read-only (default role): Users with the Read Only role for an object are allowed to view the state of the object and details about it. For example, users with this role can find the shared datastore accessible to all node VMs. Required on all hosts where the node VMs reside and on the data center.
For topology-aware environments, all ancestors of node VMs, such as the host, cluster, folder, and data center, must have the Read-only role set for the vSphere user configured to use vSphere Container Storage Plug-in. This is required to allow reading tags and categories to prepare the nodes' topology.
You need to assign roles to the vSphere objects participating in the Cloud Native Storage environment. Make sure to apply roles when a new entity, such as a node VM or a datastore, is added to the vCenter Server inventory for the Kubernetes cluster.
The following sample vSphere inventory provides more information about role assignment to vSphere objects.
| |
|-10.192.218.26 (ESXi Host)
| |
| |-k8s-node3 (node-vm)
As an example, assume that each host has the following shared datastores along with some local VMFS datastores:
- shared-vmfs
- shared-nfs
- vsanDatastore
The sample inventory uses the following roles: ReadOnly, CNS-HOST-CONFIG-STORAGE, CNS-DATASTORE, CNS-VM, and CNS-SEARCH-AND-SPBM.
For more information on providing vCenter Server credentials access to Kubernetes nodes, see
Deploy vSphere Container Storage Plug-in with Topology.
Configure all VMs that form the Kubernetes cluster to work with vSphere Container Storage Plug-in. You can configure the VMs using the vSphere Client or the govc command-line tool.
Prerequisites
- On each node VM, install VMware Tools. For more information about installation, see Installing and Upgrading VMware Tools in vSphere.
Procedure
If the parameter exists, make sure that its value is set to True. If the parameter is not
present, add it and set its value to True.
Name             Value
disk.EnableUUID  True
b On the Virtual Hardware tab, click the Add New Device button.
d Expand New SCSI controller and, from the Change Type menu, select VMware Paravirtual.
e Click OK.
Example
As an alternative, you can configure the VMs using the govc command-line tool.
2 Obtain VM paths.
$ export GOVC_INSECURE=1
$ export GOVC_URL='https://<VC_Admin_User>:<VC_Admin_Passwd>@<VC_IP>'
$ govc ls
/<datacenter-name>/vm
/<datacenter-name>/network
/<datacenter-name>/host
/<datacenter-name>/datastore
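With the VM paths from govc ls, you can also set the disk.EnableUUID parameter from the command line instead of the vSphere Client. A minimal sketch (the VM path is a placeholder):

$ govc vm.change -vm '/<datacenter-name>/vm/<k8s-node-name>' -e disk.enableUUID=TRUE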
Prerequisites
Ensure that you have the following permissions before you install a Cloud Provider Interface on
your Kubernetes cluster in the vSphere environment:
- Read permission on the parent entities of the node VMs, such as folder, host, datacenter, datastore folder, and datastore cluster.
If your environment includes multiple vCenter Server instances, see Install vSphere Cloud
Provider Interface in an Environment with Multiple vCenter Server Instances.
Procedure
1 Before you install CPI, verify that all nodes, including the control plane nodes, are tainted with
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule.
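A hedged sketch of applying and verifying the taint (the node name is a placeholder):

$ kubectl taint node <node-name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
$ kubectl describe nodes | egrep "Taints:|Name:"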
2 Identify the Kubernetes major version. For example, if the major version is 1.22.x, run the following:

VERSION=1.22
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-$VERSION/releases/v$VERSION/vsphere-cloud-controller-manager.yaml
Note This secret is used for CPI. A separate secret is required for vSphere Container Storage Plug-in.
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-secret
  labels:
    vsphere-cpi-infra: secret
    component: cloud-controller-manager
  namespace: kube-system
# NOTE: this is just an example configuration, update with real values based on your environment
stringData:
  10.185.0.89.username: "Administrator@vsphere.local"
  10.185.0.89.password: "Admin!23"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vsphere-cloud-config
  labels:
    vsphere-cpi-infra: config
    component: cloud-controller-manager
  namespace: kube-system
data:
  # NOTE: this is just an example configuration, update with real values based on your environment
  vsphere.conf: |
    # Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
    global:
      port: 443
      # set insecureFlag to true if the vCenter uses a self-signed cert
      insecureFlag: true
      # settings for using k8s secret
      secretName: vsphere-cloud-secret
      secretNamespace: kube-system
    # vcenter section
    vcenter:
      my-vc-name:
        server: 10.185.0.89
        user: Administrator@vsphere.local
        password: Admin!23
        datacenters:
          - VSAN-DC
4 Apply the release manifest with updated values for the config map.
This action creates Roles, RoleBindings, a ServiceAccount, a Service, and the cloud-controller-manager pod.
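A minimal sketch of the apply command, assuming the manifest file name downloaded in step 2:

$ kubectl apply -f vsphere-cloud-controller-manager.yaml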
rm vsphere-cloud-controller-manager.yaml
Note
- You can use the external custom cloud provider CPI with vSphere Container Storage Plug-in.
Procedure
u Modify the CoreDNS ConfigMap and add the conditional forwarder configuration.
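One way to open the ConfigMap for editing, as a sketch assuming CoreDNS runs in the kube-system namespace:

$ kubectl edit configmap coredns -n kube-system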
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
vsanfs-sh.prv:53 {
    errors
    cache 30
    forward . 10.161.191.241
}
In this configuration:
- vsanfs-sh.prv is the DNS suffix for the vSAN file shares.
- 10.161.191.241 is the DNS server that resolves the file share host name.
You can obtain the DNS suffix and DNS IP address from vCenter Server using the following menu options: vSphere Cluster > Configure > vSAN > Services > File Service.
Before installing the vSphere Container Storage Plug-in, ensure that your environment meets all
required prerequisites. For more information, see Preparing for Installation of vSphere Container
Storage Plug-in.
Perform all installation procedures on the same Kubernetes node where you deploy the vSphere
Container Storage Plug-in. VMware recommends that you install the vSphere Container Storage
Plug-in on the Kubernetes control plane node.
Procedure
2 Taint Kubernetes Control Plane Node for the vSphere Container Storage Plug-in Installation
Before installing the vSphere Container Storage Plug-in in your generic Kubernetes environment, make sure that you taint the control plane node with the node-role.kubernetes.io/control-plane=:NoSchedule parameter.
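A minimal sketch of the taint command (the node name is a placeholder):

$ kubectl taint nodes <control-plane-node-name> node-role.kubernetes.io/control-plane=:NoSchedule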
Procedure
u To create the vmware-system-csi namespace for vSphere Container Storage Plug-in, run the following command.
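A minimal sketch of the command:

$ kubectl create namespace vmware-system-csi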
Note To be able to take advantage of the latest bug fixes and feature updates, make sure to
use the most recent version of vSphere Container Storage Plug-in. For versions and updates,
see Release Notes on the VMware vSphere Container Storage Plug-in Documentation page.
Procedure
Taints: <none>
Name: <k8s-worker3-name>
Taints: <none>
Name: <k8s-worker4-name>
Taints: <none>
Before installing the vSphere Container Storage Plug-in on a native Kubernetes cluster, create a
configuration file that contains details to connect to vSphere. The default file for the configuration
details is the csi-vsphere.conf file. If you prefer to use a file with another name, change the
environment variable VSPHERE_CSI_CONFIG in the deployment YAMLs. For more information, see
Install the vSphere Container Storage Plug-in.
For information about topology-aware deployments, see Deploy vSphere Container Storage
Plug-in with Topology.
For information about deployments with multiple vCenter Server instances, see Deploying
vSphere Container Storage Plug-in with Multiple vCenter Server Instances.
Procedure
- Block volumes.
The vSphere configuration file for block volumes includes the following sample entries.

$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "<cluster-id>"
cluster-distribution = "<cluster-distribution>"
ca-file = <ca file path>            # optional, use with insecure-flag set to false
thumbprint = "<cert thumbprint>"    # optional, use with insecure-flag set to false without providing ca-file
Note To deploy the vSphere Container Storage Plug-in for block volumes in a VMware Cloud environment, you must enter the cloud administrator username and password in the vSphere configuration file.
- File volumes.
For file volumes, it is optional to add parameters that specify network permissions and placement of volumes; otherwise, default values are used. Use the following configuration file as an example.
$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "<cluster-id>"
cluster-distribution = "<cluster-distribution>"
ca-file = <ca file path> # optional, use with insecure-flag set to false
[NetPermissions "A"]
ips = "*"
permissions = "READ_WRITE"
rootsquash = false
[NetPermissions "B"]
ips = "10.20.20.0/24"
permissions = "READ_ONLY"
rootsquash = true
[NetPermissions "C"]
ips = "10.30.30.0/24"
permissions = "NO_ACCESS"
[NetPermissions "D"]
ips = "10.30.10.0/24"
rootsquash = true
[NetPermissions "E"]
ips = "10.30.1.0/24"
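The rm step below assumes the configuration file has already been stored as a Kubernetes secret. A sketch of that step, using vsphere-config-secret, the secret name the driver manifests reference:

$ kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=vmware-system-csi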
rm csi-vsphere.conf
Prerequisites
Make sure to enter false as a value for the insecure-flag parameter in the vSphere
configuration file. The value indicates that you plan to use a secure connection.
If your environment includes multiple vCenter Server instances, see Use a Secure Connection in
the Environment with Multiple vCenter Server Instances.
Procedure
$ cd certs/lin
$ kubectl create configmap vc-root-ca-cert --from-file=6355e8d1.0 --namespace=vmware-system-csi
configmap/vc-root-ca-cert created
[Global]
.
.
insecure-flag = "false"
ca-file = "/etc/ssl/certs/6355e8d1.0"
.
b Update the vCenter Server details to FQDN as shown in the example below.
[Global]
cluster-id = "cluster1"
cluster-distribution = "CSI-Vanilla"
[VirtualCenter "vCenter-FQDN"]
insecure-flag = "false"
ca-file = /certs/lin/1abc830c.0
user = "administrator@vsphere.local"
password = "Admin!444"
port = "555"
datacenters = "VSAN-DC"
[Snapshot]
global-max-snapshots-per-block-volume = 3
[Labels]
topology-categories = "k8s-zone"
Refer to the following change for the vsphere-csi-controller deployment, applied to the vsphere-csi-controller and vsphere-syncer containers.
...
containers:
  - name: vsphere-csi-controller
    volumeMounts:
      - mountPath: /etc/ssl/certs/6355e8d1.0
        subPath: 6355e8d1.0
        name: vc-root-ca-cert
  - name: vsphere-syncer
    volumeMounts:
      - mountPath: /etc/ssl/certs/6355e8d1.0
        subPath: 6355e8d1.0
        name: vc-root-ca-cert
...
volumes:
  - name: vc-root-ca-cert
    configMap:
      name: vc-root-ca-cert
...
5 Apply the above change for vsphere-csi-controller deployment and wait for the
vSphere Container Storage Plug-in controller pods to restart.
Prerequisites
- You use an environment with multiple vCenter Server instances. See Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances.
- You entered false as a value for the insecure-flag parameter in the vSphere configuration file. The value indicates that you plan to use a secure connection instead of using a self-signed certificate for login.
Procedure
1 For each vCenter Server that needs a secure connection, download the trusted root CA certificates from https://vCenter-IP-Address/certs/download.zip, extract the download.zip file containing the certificates, and create a config-map using the certificate in the certs/lin directory.
In the following example, a Kubernetes cluster is spread across two vCenter Server instances.
vSphere Container Storage Plug-in needs to establish a secure connection with both
instances.
── win
   ├── 6355e8d1.0.crt
   └── 6355e8d1.r1.crl

3 directories, 6 files
2 For each vCenter Server instance, create a config-map for the root CA certificate.

$ cd certs-vc1/lin
$ kubectl create configmap vc-1-root-ca-cert --from-file=6355e8d1.0 --namespace=vmware-system-csi
configmap/vc-1-root-ca-cert created

$ cd certs-vc2/lin
$ kubectl create configmap vc-2-root-ca-cert --from-file=4135e8d1.0 --namespace=vmware-system-csi
configmap/vc-2-root-ca-cert created
3 For each instance of secure vCenter Server, set insecure-flag to false in the vsphere-
config-secret in the vmware-system-csi namespace.
...
containers:
  - name: vsphere-csi-controller
    volumeMounts:
      - mountPath: /etc/ssl/certs/6355e8d1.0
        subPath: 6355e8d1.0
        name: vc-1-root-ca-cert
      - mountPath: /etc/ssl/certs/4135e8d1.0
        subPath: 4135e8d1.0
        name: vc-2-root-ca-cert
  - name: vsphere-syncer
    volumeMounts:
      - mountPath: /etc/ssl/certs/6355e8d1.0
        subPath: 6355e8d1.0
        name: vc-1-root-ca-cert
      - mountPath: /etc/ssl/certs/4135e8d1.0
        subPath: 4135e8d1.0
        name: vc-2-root-ca-cert
...
volumes:
  - name: vc-1-root-ca-cert
    configMap:
      name: vc-1-root-ca-cert
  - name: vc-2-root-ca-cert
    configMap:
      name: vc-2-root-ca-cert
...
5 Apply the changes you made in Step 4 for vsphere-csi-controller deployment and wait
for the vSphere Container Storage Plug-in controller pods to restart.
If you do not provide the cluster ID field or keep it empty while creating a configuration secret for vSphere Container Storage Plug-in, the plug-in automatically generates a cluster ID that is unique across all clusters. A vsphere-csi-cluster-id ConfigMap is created in the namespace where you installed vSphere Container Storage Plug-in to store this cluster ID.
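A sketch of retrieving the generated cluster ID from that ConfigMap (output layout may vary by release):

$ kubectl get configmap vsphere-csi-cluster-id --namespace=vmware-system-csi -o yaml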
Procedure
3 When you install vSphere Container Storage Plug-in version 2.x, specify the cluster ID
retrieved in step 1 when you create a vSphere configuration secret.
To deploy your Kubernetes cluster with topology aware provisioning feature, see Topology-
Aware Volume Provisioning.
Prerequisites
Check which Kubernetes version corresponds to a specific version of the vSphere Container
Storage Plug-in. See Compatibility Matrices for vSphere Container Storage Plug-in. For
information about features that different versions of the vSphere Container Storage Plug-in
support, see Supported Kubernetes Functionality.
Procedure
Note To be able to take advantage of the latest bug fixes and feature updates,
make sure to use the most recent version of the vSphere Container Storage Plug-in. For
versions and updates, see Release Notes on the VMware vSphere Container Storage Plug-in
Documentation page.
If you deploy the vSphere Container Storage Plug-in in a single control plane setup, you can
edit the following field to change the number of replicas to one.
If the driver is already deployed, use the kubectl edit command to reduce the number of
replicas.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: vsphere-csi-controller
  namespace: vmware-system-csi
spec:
  replicas: number_of_replicas
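A sketch of the edit command for an already deployed driver:

$ kubectl edit deployment vsphere-csi-controller --namespace=vmware-system-csi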
2 Verify that the vSphere Container Storage Plug-in has been successfully deployed.
a Verify that vsphere-csi-controller instances run on the control plane node and vsphere-
csi-node instances run on worker nodes of the cluster.
b Verify that the vSphere Container Storage Plug-in has been registered with Kubernetes.
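A hedged sketch of the verification commands (csi.vsphere.vmware.com is the CSIDriver object the plug-in registers):

$ kubectl get pods --namespace=vmware-system-csi -o wide
$ kubectl get csidriver csi.vsphere.vmware.com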
Results
In a Kubernetes cluster based on a single vCenter Server system, vSphere Container Storage
Plug-in is as highly available as vCenter Server. If vCenter Server fails, vSphere Container Storage
Plug-in stops volume operations. In addition, the performance and throughput of volume life
cycle operations and the scale of volumes are limited to what a single vCenter Server instance
supports.
With the multi-instance vCenter Server functionality, you can stretch the Kubernetes cluster
across multiple vCenter Server instances. This allows you to achieve higher availability,
performance, and scale of persistent volumes. Also, in a multi-zone infrastructure topology, you
can deploy one instance of vCenter Server per availability zone, or fault domain. You can then
stretch the Kubernetes cluster across the availability zones for higher availability, performance,
and scale of persistent volumes.
- Improved scale. A single vCenter Server instance supports a maximum of 10,000 CNS block volumes. In a Kubernetes deployment stretched across multiple vCenter Server instances, vSphere Container Storage Plug-in can support 10,000 CNS block volumes per vCenter Server.
- The deployment supports only native Kubernetes clusters. The deployment does not support Tanzu Kubernetes Grid clusters.
- The deployment supports only block volume provisioning. File volume provisioning is not supported.
- Your deployment must be topology-aware. If you haven't used the topology functionality, you must re-create the entire cluster to be topology-aware. During volume provisioning, the topology mechanism helps identify and specify nodes for selecting vSphere storage resources spread across multiple vCenter Server systems.
- Follow all guidelines for deploying vSphere Container Storage Plug-in with topology. For information, see Deploying vSphere Container Storage Plug-in with Topology.
- Every node VM on each vCenter Server must have a tag. The value of the tag yields a unique combination of values across all categories mentioned in the topology-categories parameter. The topology-categories parameter is specified in the vSphere configuration secret on the associated vCenter Server.
  Node registration fails if the node does not belong to every category under the topology-categories parameter, or if the tag values do not generate a unique combination across the different tag categories in the associated vCenter Server.
- vSphere Container Storage Plug-in version 3.0 does not support datastores shared across multiple vCenter Server instances.
- To provision storage based on any specific storage policy, configure the storage policy for each individual vCenter Server, so that vCenter Server can follow the policy-based provisioning requirements. Make sure to use the same policy name and the same policy parameters on all participating vCenter Server systems.
- Specify configuration details of all configured vCenter Server instances under a separate VirtualCenter section in the vSphere configuration file.
Procedure
$ wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-VERSION/releases/vVERSION/vsphere-cloud-controller-manager.yaml

Replace VERSION with an appropriate major version of Kubernetes. For example, if the version is 1.22.x, run the following command:

$ wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-1.22/releases/v1.22/vsphere-cloud-controller-manager.yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-secret
  labels:
    vsphere-cpi-infra: secret
    component: cloud-controller-manager
  namespace: kube-system
# NOTE: this is just an example configuration, update with real values based on your environment
stringData:
  VC-1-IP.username: "username"
  VC-1-IP.password: "password"
  VC-2-IP.username: "username"
  VC-2-IP.password: "password"
  VC-3-IP.username: "username"
  VC-3-IP.password: "password"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vsphere-cloud-config
  labels:
    vsphere-cpi-infra: config
    component: cloud-controller-manager
  namespace: kube-system
data:
  # NOTE: this is just an example configuration, update with real values based on your environment
  vsphere.conf: |
    # Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
    global:
      port: 443
      # set insecureFlag to true if the vCenter uses a self-signed cert
      insecureFlag: true
      # settings for using k8s secret
      secretName: vsphere-cloud-secret
      secretNamespace: kube-system
    # vcenter section
    vcenter:
      vc-1:
        server: <VC-1-IP or VC-1-FQDN>
        user: <username>
        password: <password>
        datacenters:
          - datacenter-path
      vc-2:
        server: <VC-2-IP or VC-2-FQDN>
        user: <username>
        password: <password>
        datacenters:
          - datacenter-path
      vc-3:
        server: <VC-3-IP or VC-3-FQDN>
        user: <username>
        password: <password>
        datacenters:
          - datacenter-path
Prerequisites
Make sure that vSphere Container Storage Plug-in is of version 3.0 or later.
Procedure
1 In the vSphere configuration file, add configuration details for all instances of vCenter Server
under the VirtualCenter section.
For information about the configuration file, see Create a Kubernetes Secret for vSphere
Container Storage Plug-in.
Use the following configuration file as an example for provisioning block volumes in a vSphere environment with two instances of vCenter Server.

$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "<cluster-id>"
cluster-distribution = "<cluster-distribution>"

[Labels]
topology-categories = "k8s-zone"
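The sample above omits the per-instance sections. A sketch of what they look like, following the VirtualCenter format shown earlier in this document (all values are placeholders):

[VirtualCenter "<VC-1-IP or VC-1-FQDN>"]
user = "<username>"
password = "<password>"
datacenters = "<datacenter-path>"

[VirtualCenter "<VC-2-IP or VC-2-FQDN>"]
user = "<username>"
password = "<password>"
datacenters = "<datacenter-path>"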
For information about enabling topology when deploying vSphere Container Storage Plug-in,
see Deploy vSphere Container Storage Plug-in with Topology.
3 After the installation, verify the topology-aware setup with multiple vCenter Server instances.
Check that the driver pods of vSphere Container Storage Plug-in are up and running.
- If you already use vSphere Container Storage Plug-in to run applications, but haven't used the topology feature, you must re-create the entire cluster and delete any existing PVCs in the system to be able to use the topology feature.
- Depending on your vSphere storage environment, you can use different deployment scenarios for availability zones. For example, you can define availability zones per host, host cluster, or data center, or have a combination of these.
- Kubernetes assumes that even though the topology labels applied on a node are mutable, a node will not move between zones without being destroyed and re-created. See https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone. As a result, if you define the availability zones at the host level, make sure to pin the node VMs to their respective hosts to prevent migration of these VMs to other availability zones. You can do this either by creating the node VM on the host's local datastore or by setting DRS VM-Host affinity rules. For information, see VM-Host Affinity Rules.
  An exception to this guideline is an active-passive setup that has storage replicated between two topology domains, as specified in the diagram in Deploy Workloads on a Preferential Datastore in a Topology-Aware Environment. In this case, you can migrate node VMs temporarily when either of the topology domains is down.
- Distribute your control plane VMs across the availability zones to ensure high availability.
- Have at least one worker node VM in each availability zone. Following this guideline is helpful when you use a StorageClass with no topology requirements explicitly set, and Kubernetes randomly selects a topology domain to schedule a pod in it. In such cases, if the topology domain does not have a worker node, the pod remains in a pending state.
- Do not apply topology-related vSphere tags to node VMs. vSphere Container Storage Plug-in cannot read topology labels applied on the nodes.
- Use the topology-categories parameter in the Labels section of the vSphere configuration file. This parameter adds custom topology labels to the nodes: topology.csi.vmware.com/category-name.
vSphere Container Storage Plug-in releases earlier than 2.4 do not support the topology-categories parameter.
For example, a node tagged with zone-a receives the label topology.csi.vmware.com/k8s-zone=zone-a.
- Each node VM in a topology-aware Kubernetes cluster must belong to a tag under each category mentioned in the topology-categories parameter in Step 2.a. Node registration fails if a node does not belong to every category under the topology-categories parameter.
- With vSphere Container Storage Plug-in version 3.1.0 and later, volume provisioning requests in topology-aware environments attempt to create volumes in datastores accessible to all hosts under a given topology segment. This includes hosts that do not have Kubernetes node VMs running on them. For example, if the vSphere Container Storage Plug-in driver gets a request to provision a volume in zone-a, applied on the data center dc-1, all hosts under dc-1 must have access to the datastore selected for volume provisioning. The hosts include those that are directly under dc-1 and those that are part of clusters inside dc-1.
[Figure: sample availability zone layout — Cluster2 (availability zone category k8s-zone, tag zone-B) contains ControlPlaneVM2, WorkerNodeVM2, and WorkerNodeVM3; Cluster3 (availability zone category k8s-zone, tag zone-C) contains ControlPlaneVM3 and WorkerNodeVM4.]
Prerequisites

Note
- With vSphere Container Storage Plug-in version 3.0.2, you can transition from a non-topology configuration to a topology configuration if there are no Persistent Volumes (PVs) in the cluster. However, you cannot migrate non-topology setups with PVs to topology setups.
- Using vSphere Container Storage Plug-in version 3.0.2, you can migrate non-topology setups to topology setups without managing internal topology CRs. This simplifies the transition process if you have already deployed a non-topology setup and want to enable topology before deploying any workload on the cluster.
To create tags for your zones, ensure that you meet the following prerequisites:
- Have appropriate tagging privileges that control your ability to work with tags.
- Ancestors of node VMs, such as the host, cluster, folder, and data center, must have the Read-only role set for the vSphere user configured to use vSphere Container Storage Plug-in. This is required to allow reading tags and categories to discover the nodes' topology. For more information, see vSphere Tagging Privileges in the vSphere Security documentation.
Procedure
- The names you use for the tag categories and tags must be 63 characters or less, begin and end with an alphanumeric character, and contain only dashes, underscores, dots, or alphanumerics in between.
- Do not use the same name for two different tag categories.
- Tags you create must be unique across topology domains. For example, you cannot have zone1 under Region1 and zone1 under Region2.
For information, see Create, Edit, or Delete a Tag Category in the vCenter Server and Host Management documentation.

a Create two tag categories, k8s-zone and k8s-region, with the following tags under each category:

Category     Tags
k8s-region   region-1, region-2
k8s-zone     zone-A, zone-B, zone-C, zone-D

c Apply corresponding tags to the data center and clusters as indicated in the following list. For information, see Assign or Remove a Tag in the vCenter Server and Host Management documentation.

Datacenter1: region-1
Datacenter2: region-2
Cluster1: zone-A
Cluster2: zone-B
Cluster3: zone-C
Cluster4: zone-D
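You can also create and attach the tags from the command line. A hedged sketch using govc (the inventory paths are illustrative):

$ govc tags.category.create k8s-region
$ govc tags.category.create k8s-zone
$ govc tags.create -c k8s-region region-1
$ govc tags.create -c k8s-zone zone-A
$ govc tags.attach -c k8s-region region-1 /Datacenter1
$ govc tags.attach -c k8s-zone zone-A /Datacenter1/host/Cluster1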
For information about the configuration file, see Create a Kubernetes Secret for vSphere
Container Storage Plug-in.
[Labels]
topology-categories = "k8s-region, k8s-zone"
Make sure the external-provisioner sidecar is deployed with the arguments --feature-gates=Topology=true and --strict-topology. To do this, search for the following lines and uncomment them in the YAML file at https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v3.0.0/manifests/vanilla.
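The uncommented lines look roughly like this in the csi-provisioner container spec (a sketch; surrounding arguments vary by release):

- name: csi-provisioner
  args:
    # ...other arguments as shipped in the manifest...
    - "--feature-gates=Topology=true"
    - "--strict-topology"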
Note To be able to take advantage of the latest bug fixes and feature updates, make
sure to use the most recent version of vSphere Container Storage Plug-in. For versions
and updates, see Release Notes on the VMware vSphere Container Storage Plug-in
Documentation page.
For information about deploying vSphere Container Storage Plug-in, see Install the
vSphere Container Storage Plug-in.
b Verify that all nodes have the right topology labels set on them.
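A sketch of the check, assuming the k8s-zone and k8s-region categories from the earlier examples:

$ kubectl get nodes -L topology.csi.vmware.com/k8s-zone -L topology.csi.vmware.com/k8s-region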
What to do next
Procedures in this section apply only to native, also called vanilla, Kubernetes clusters deployed in a vSphere environment. To upgrade vSphere with Tanzu, see vSphere with Tanzu Configuration and Management.
- Be familiar with installation prerequisites and procedures for vSphere Container Storage Plug-in. See Preparing for Installation of vSphere Container Storage Plug-in and Deploying the vSphere Container Storage Plug-in on a Native Kubernetes Cluster.
- Ensure that roles and privileges in your vSphere environment are updated. For more information, see vSphere Roles and Privileges.
- To upgrade to vSphere Container Storage Plug-in 2.3.0, you need a DNS forwarding configuration in the CoreDNS ConfigMap to help resolve the vSAN file share host name. For more information, see Configure CoreDNS for vSAN File Share Volumes.
- If you have RWM volumes backed by file service deployed using vSphere Container Storage Plug-in, remount the volumes before you upgrade vSphere Container Storage Plug-in.
- When upgrading from Beta topology to GA in vSphere Container Storage Plug-in, follow these recommendations. For information about deployments with topology, see Deploy vSphere Container Storage Plug-in with Topology.
  - If you have used the topology feature in its Beta version on vSphere Container Storage Plug-in version 2.3 or earlier, upgrade vSphere Container Storage Plug-in to version 2.4.1 or later to be able to use the GA version of the topology feature.
  - If you have used the Beta topology feature and plan to upgrade vSphere Container Storage Plug-in to version 2.4.1 or later, continue using only the zone and region parameters.
  - If you do not specify a Label for a particular topology category while using the zone and region parameters in the configuration file, vSphere Container Storage Plug-in assumes the default Beta topology behavior and applies failure-domain.beta.kubernetes.io/XYZ labels on the node. You do not need to make a mandatory configuration change before upgrading the driver from the Beta topology to the GA topology feature.
    [Table: earlier vSphere secret configuration vs. vSphere secret configuration before the upgrade vs. sample labels on a node after the upgrade; for example, a node retains failure-domain.beta.kubernetes.io/zone=zone-a.]
  - If you intend to use the topology GA labels after upgrading to vSphere Container Storage Plug-in 2.4.1 or later, make sure you do not have any pre-existing StorageClasses or PV node affinity rules pointing to the topology Beta labels in the environment, and then make the following change in the vSphere configuration secret.
    [Table: earlier vSphere secret configuration vs. vSphere secret configuration before the upgrade vs. sample labels on a node after the upgrade.]
Note If you upgrade from 2.3.0 to 2.4.0 or later, you do not need to perform these steps. In
addition, upgrades from v2.2.2, v2.1.2, and v2.0.2 to version v2.3.0 or later do not require this
procedure.
When you perform the following steps in a maintenance window, the process might disrupt
active IOs on the file share volumes used by application pods. If you have multiple replicas of the
pod that access the same file share volume, perform the following steps on each mount point
serially to minimize downtime and disruptions.
Note Use this task only when the vSphere Container Storage Plug-in node daemonset runs as a container. When it runs as a process on the TKGi platform, the task does not apply. However, you also need to perform these steps when TKGi is upgraded from the pod-based driver to the process-based driver. For more information, see the documentation at https://docs.pivotal.io/tkgi/1-12/vsphere-cns.html#uninstall-csi.
Procedure
In the following example, the volume is attached and mounted on the k8s-worker3 node.
3 To discover where the volume is mounted, log in to the k8s-worker3 node VM and use the
following command.
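A hedged sketch of the discovery command, using the PV name from the example below:

root@k8s-worker3:~# mount | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e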
4 Unmount and remount the volume at the same location with the same mount options originally used for mounting the volume.
For this step, you need to pre-install the nfs-common package on the worker VMs.
a Use the umount -fl command to unmount the volume.
root@k8s-worker3:~#
umount -fl /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/
kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount
b Remount the volume with the same mount options used originally.
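A sketch of the remount, assuming an NFS 4.1 vSAN file share; copy the exact device and mount options from the mount output in step 3 rather than these placeholders:

root@k8s-worker3:~# mount -t nfs4 -o minorversion=1,sec=sys <file-share-host>:/<share-path> /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount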
What to do next
After you have remounted all the vSAN file share volumes on the worker VMs, upgrade the
vSphere Container Storage Plug-in by reinstalling its YAML files.
The following example illustrates an upgrade of vSphere Container Storage Plug-in from v2.2.0 to
v2.3.0.
Procedure
1 Uninstall the existing version of vSphere Container Storage Plug-in using the manifests for your installed version from https://github.com/kubernetes-sigs/vsphere-csi-driver/tags.
After you run the above commands, wait for the vSphere Container Storage Plug-in controller
pod and vSphere Container Storage Plug-in node pods to be deleted completely.
2 Install vSphere Container Storage Plug-in of your choice, for example, v2.3.0.
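A hedged sketch of both steps, assuming the single-manifest layout that releases 2.3.0 and later use (adjust paths to the releases you actually installed):

# uninstall the old release using the same manifest you applied at install time
$ kubectl delete -f vsphere-csi-driver.yaml
# wait until the controller and node pods are gone, then install the new release
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.3.0/manifests/vanilla/vsphere-csi-driver.yaml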
Procedure
2 Apply any necessary changes to the manifest pertaining to the release that you wish to use.
For example, adjust the replica count in the vsphere-csi-controller deployment depending
upon the number of control plane nodes in the cluster.
3 To upgrade vSphere Container Storage Plug-in 2.3.0 or later, run the following command.
The following example uses 3.0.0 as a target version, but you can substitute it with any other
version later than 2.3.0.
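A sketch of the command, assuming the vanilla manifest path used by recent releases:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.0.0/manifests/vanilla/vsphere-csi-driver.yaml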
Note To be able to take advantage of the latest bug fixes and feature updates, make sure to
use the most recent version of vSphere Container Storage Plug-in. For versions and updates,
see Release Notes on the VMware vSphere Container Storage Plug-in Documentation page.
Procedure
1 If you haven't previously enabled the snapshot feature and installed snapshot components,
perform the following steps:
2 If you have enabled the snapshot feature or if any snapshot components exist in the setup,
follow these steps:
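As part of enabling the feature, the block-volume-snapshot switch in the driver's feature-states ConfigMap must be turned on. A hedged sketch (the ConfigMap name matches the one shown later in this document):

$ kubectl patch configmap internal-feature-states.csi.vsphere.vmware.com \
    --namespace=vmware-system-csi --type merge -p '{"data":{"block-volume-snapshot":"true"}}'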
vSphere Container Storage Plug-in and CNS provide functionality that is not available with the
in-tree vSphere volume plug-in. For information, see Supported Kubernetes Functionality and
vSphere Functionality Supported by vSphere Container Storage Plug-in.
Note
- Migration of in-tree vSphere volumes to CSI does not work with Kubernetes version 1.29.0. See https://github.com/kubernetes/kubernetes/issues/122340.
- Kubernetes will deprecate the in-tree vSphere volume plug-in, and it will be removed in future Kubernetes releases. Volumes provisioned using the vSphere in-tree plug-in will not get additional new features supported by vSphere Container Storage Plug-in.
- vSphere version 7.0 p07 and vSphere version 8.0 Update 2 or later are recommended for in-tree vSphere volume migration to vSphere Container Storage Plug-in.
- vSphere Container Storage Plug-in migration is released as a Beta feature in Kubernetes 1.19. For more information, see the release note announcement.
- If you plan to use CSI version 3.0 or 3.1 for migrated volumes, use the latest patch version 3.0.3 or version 3.1.1. These patch versions include the fix for the issue https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/2534. This issue occurs when both CSI migration and list-volumes functionality are enabled.
- The Kubernetes 1.19 release deprecates vSAN raw policy parameters for the in-tree vSphere volume plug-in. These parameters will be removed in a future release. For more information, see the deprecation announcement.
- The following vSphere in-tree StorageClass parameters are not supported after you enable migration:
  - hostfailurestotolerate
  - forceprovisioning
  - cachereservation
  - diskstripes
  - objectspacereservation
  - iopslimit
  - diskformat
- You cannot rename or delete the storage policy consumed by an in-tree vSphere volume. Volume migration requires the original storage policy used for provisioning the volume to be present on vCenter Server for registration of the volume as a container volume in vSphere.
- Do not rename the datastore consumed by an in-tree vSphere volume. Volume migration relies on the original datastore name present on the volume source for registration of the volume as a container volume in vSphere.
- Make sure to add the following annotations before enabling migration for statically created vSphere in-tree Persistent Volume Claims and Persistent Volumes. Statically provisioned in-tree vSphere volumes cannot be migrated to vSphere Container Storage Plug-in without these annotations. This also applies to new static in-tree PVs and PVCs created after you enable migration.
Annotation on the PV:

annotations:
  pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume

Annotation on the PVC:

annotations:
  volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
- After migration, if you use vSphere releases earlier than 8.0 Update 1, the only supported value for the diskformat parameter is thin. Volumes created before the migration with the disk format eagerzeroedthick or zeroedthick are migrated to CSI. Starting with vSphere 8.0 Update 1, you can use storage policies with a thick volume requirement to migrate eagerzeroedthick or zeroedthick volumes. For more information, see Create a VM Storage Policy for VMFS Datastore in the vSphere Storage documentation.
- vSphere Container Storage Plug-in does not support raw vSAN policy parameters. After you enable the migration, vSphere Container Storage Plug-in fails the volume creation activity when you request a new volume using the in-tree provisioner and vSAN raw policy parameters.
- The vSphere Container Storage Plug-in migration requires a compatible version of vSphere. For information, see Supported Kubernetes Functionality.
- vSphere Container Storage Plug-in does not support formatting volumes with the Windows file system. In-tree vSphere volumes formatted with the Windows file system cannot be used with vSphere Container Storage Plug-in.
- The in-tree vSphere volume plug-in relies on the name of the datastore set on the PV's source. After you enable migration, do not enable Storage DRS or vMotion. If Storage DRS moves a disk from one datastore to another, further volume operations might break.
- If you use zone- and region-aware in-tree deployments, upgrade to vSphere Container Storage Plug-in version 2.4.1 or later. Before installing vSphere Container Storage Plug-in, add the zone and region section to the vSphere secret configuration. Nodes then carry Beta topology labels such as failure-domain.beta.kubernetes.io/zone=zone-a. For information about deployments with topology, see Deploy vSphere Container Storage Plug-in with Topology.
Prerequisites
Make sure to use compatible versions of vSphere and Kubernetes. See Supported Kubernetes
Functionality.
Procedure
The following sample deployment YAML file uses version 2.4, but you can substitute it with a later version of your choice: https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/release-2.4/manifests/vanilla/vsphere-csi-driver.yaml
apiVersion: v1
data:
  "csi-migration": "true"
  .........
kind: ConfigMap
metadata:
  name: internal-feature-states.csi.vsphere.vmware.com
  namespace: vmware-system-csi
vSphere Container Storage Plug-in does not support provisioning of a volume by specifying migration-specific parameters in the StorageClass. These parameters are added by the vSphere Container Storage Plug-in translation library and should not be used in the storage class directly.
The validating admission controller prevents you from creating or updating a StorageClass that uses csi.vsphere.vmware.com as the provisioner with these parameters:
- csimigration
- datastore-migrationparam
- diskformat-migrationparam
- hostfailurestotolerate-migrationparam
- forceprovisioning-migrationparam
- cachereservation-migrationparam
- diskstripes-migrationparam
- objectspacereservation-migrationparam
- iopslimit-migrationparam
In addition, the validating admission controller prevents you from creating or updating a StorageClass that uses kubernetes.io/vsphere-volume as the provisioner with AllowVolumeExpansion set to true.
As a prerequisite, you must install the kubectl, openssl, and base64 commands on the system from which you invoke the admission webhook installation scripts.
To deploy the admission webhook, download and execute the following file. If needed, substitute the version number with a version of your choice: https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/release-2.4/manifests/vanilla/deploy-vsphere-csi-validation-webhook.sh

./deploy-vsphere-csi-validation-webhook.sh
creating certs in tmpdir /var/folders/vy/_6dvxx7j5db9sq9n38qjymwr002gzv/T/tmp.ogj5ioIk
Generating a 2048 bit RSA private key
...........................................................................................
...............................................................+++
...........................................+++
writing new private key to '/var/folders/vy/_6dvxx7j5db9sq9n38qjymwr002gzv/T/tmp.ogj5ioIk/
ca.key'
-----
Generating RSA private key, 2048 bit long modulus
..............................................................+++
...........+++
e is 65537 (0x10001)
Signature ok
subject=/CN=vsphere-webhook-svc.vmware-system-csi.svc
Getting CA Private Key
secret "vsphere-webhook-certs" deleted
secret/vsphere-webhook-certs created
service/vsphere-webhook-svc created
validatingwebhookconfiguration.admissionregistration.k8s.io/
validation.csi.vsphere.vmware.com created
serviceaccount/vsphere-csi-webhook created
role.rbac.authorization.k8s.io/vsphere-csi-webhook-role created
rolebinding.rbac.authorization.k8s.io/vsphere-csi-webhook-role-binding created
deployment.apps/vsphere-csi-webhook created
The CSIMigrationvSphere flag enables shims and translation logic to route volume operations
from the vSphere in-tree volume plug-in to vSphere Container Storage Plug-in. It also
supports falling back to the in-tree vSphere plug-in if a node does not have vSphere Container
Storage Plug-in installed and configured.
CSIMigrationvSphere requires the CSIMigration feature flag to be enabled. This flag enables the
vSphere Container Storage Plug-in migration on the Kubernetes cluster.
a Update the kube-controller-manager manifest file and add the following argument.

- --feature-gates=CSIMigration=true,CSIMigrationvSphere=true

b Update the kubelet configuration file on the control plane nodes and add the following
feature gates. The file is available at /var/lib/kubelet/config.yaml.

featureGates:
CSIMigration: true
CSIMigrationvSphere: true
c Restart the kubelet on the control plane nodes using the following command.
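For example, assuming the kubelet runs as a systemd service:

systemctl restart kubelet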
d Verify that the kubelet is functioning correctly using the following command.
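For example, assuming a systemd-managed kubelet:

systemctl status kubelet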
e For any issues with the kubelet, check the logs on the control plane node using the
following command.
journalctl -xe
a Before you change the configuration on the kubelet on each workload node, drain the
nodes by removing running application workloads.
b To enable migration on the workload nodes, update the kubelet configuration file and add
the following feature gates. The file is available at /var/lib/kubelet/config.yaml.
featureGates:
CSIMigration: true
CSIMigrationvSphere: true
c Restart the kubelet on the workload nodes using the following command.
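For example, assuming the kubelet runs as a systemd service:

systemctl restart kubelet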
d Verify that the kubelet is functioning correctly using the following command.
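For example:

systemctl status kubelet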
e For any issues with the kubelet, check the logs on the workload node using the following
command.
journalctl -xe
f After you enable the migration, ensure that the CSINode instance for the node is updated with
the storage.alpha.kubernetes.io/migrated-plugins annotation.
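For example, a quick check (the node name is a placeholder):

kubectl describe csinode <node-name>

Look for the storage.alpha.kubernetes.io/migrated-plugins annotation in the output.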
Note
n Do not uncordon the workload node until the migrated-plugins annotation on the
CSINode object for the workload node lists kubernetes.io/vsphere-volume.
g Uncordon the node after the CSINode object for the node lists kubernetes.io/vsphere-
volume as migrated-plugins.
h Repeat the above steps for all workload nodes in the Kubernetes cluster.
6 (Optional) You can enable the CSIMigrationvSphereComplete flag if you enabled the vSphere
Container Storage Plug-in migration on all nodes.
7 Verify that the vSphere in-tree PVCs and PVs are migrated to vSphere Container Storage
Plug-in, and that the pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com annotation is
present on the PVCs and PVs.
Annotations on PVCs:
Annotations on PVs:
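For example, you can list the annotations directly (the PVC and PV names are placeholders):

kubectl get pvc <pvc-name> -o jsonpath='{.metadata.annotations}'
kubectl get pv <pv-name> -o jsonpath='{.metadata.annotations}'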
After you enable the migration, vSphere Container Storage Plug-in provisions any newly
requested in-tree vSphere volume. Such a volume can be identified by using the following
annotations. The PV specification continues to hold the vSphere volume path, so if you
deactivate the migration, the provisioned volume can still be used by the in-tree vSphere plug-in.
Annotations on PVCs:
Annotations on PVs:
In addition, follow these requirements to deploy vSphere Container Storage Plug-in with
Windows.
n CSI Proxy v1. Install CSI Proxy on all Windows nodes. For more information, see
https://github.com/kubernetes-csi/csi-proxy#installation.
n Worker nodes must run Windows Server 2019. Other versions of Windows Server are not
supported. Windows worker nodes are supported on the following builds.
n Windows-ltsc2022
n Windows-20h2
n Windows-1809
n If you use containerd on the nodes, the containerd version must be 1.5 or later. For more
information, see containerd fails to add a disk mount on Windows.
Note When a pod is created and the volume does not have a filesystem created on it, the
filesystem type supplied from the StorageClass is ignored and the volume gets formatted with
the NTFS file system.
Procedure
For more information, see Install the vSphere Container Storage Plug-in.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-windows-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true # Optional: only applicable to vSphere 7.0U1 and above
parameters:
storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
#datastoreurl: "ds:///vmfs/volumes/vsan:52cdfa80721ff516-ea1e993113acfc77/" # Optional Parameter
#csi.storage.k8s.io/fstype: "ntfs" # Optional Parameter
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-windows-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: example-windows-sc
volumeMode: Filesystem
Note The PVC definition for Windows is the same as for Linux. You can specify only
ReadWriteOnce as the access mode.
apiVersion: v1
kind: Pod
metadata:
name: example-windows-pod
spec:
nodeSelector:
kubernetes.io/os: windows
containers:
- name: test-container
image: mcr.microsoft.com/windows/servercore:ltsc2019
command:
- "powershell.exe"
- "-Command"
- "while (1) { Add-Content -Encoding Ascii C:\\test\\data.txt $(Get-Date
-Format u); sleep 1 }"
volumeMounts:
- name: test-volume
mountPath: "/test/"
readOnly: false
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: example-windows-pvc
Using vSphere Container Storage Plug-in
vSphere Container Storage Plug-in for Kubernetes supports a number of different Kubernetes
features that include block volumes, file volumes, volume expansion, and so on.
n Without dynamic volume provisioning, cluster administrators have to manually make calls
to their cloud or storage provider to create new storage volumes, and then create
PersistentVolume objects to represent them in Kubernetes.
n The implementation of dynamic volume provisioning is based on the API object StorageClass
from the API group storage.k8s.io.
n A cluster administrator can define and expose multiple types of storage within a cluster by
using a custom set of parameters. Types of storage can be from the same or different storage
systems.
For more information on provisioning volumes using topology and use of the
WaitForFirstConsumer volumeBinding mode with dynamic volume provisioning, see Topology-
Aware Volume Provisioning.
Note Support for volume topology is present only in Vanilla Kubernetes for single-access (RWO)
file-system-based volumes.
You can provision a PersistentVolume dynamically on a native Kubernetes cluster by using the
following steps.
Procedure
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-sc
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: example-sc
5 Verify that the PersistentVolumeClaim has been created and has a PersistentVolume
attached to it.
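For example, a minimal check (the PVC name matches the manifest above):

kubectl describe pvc example-pvc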
When successfully created, the Status section shows Bound and the Volume field is populated.
In this example, RWO access mode indicates that the volume is created on a vSphere virtual
disk (First Class Disk).
6 Verify that PersistentVolume has been successfully created for the PersistentVolumeClaim.
The Status must display as Bound. You can also see that the Claim is set to the above
PersistentVolumeClaim name example-pvc.
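For example, list the PVs and then describe the one bound to the claim (the PV name is
auto-generated; copy it from the Volume field of the PVC):

kubectl get pv
kubectl describe pv <pv-name>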
Note Information on this page applies only to vSphere Container Storage Plug-in versions 2.4.1
to 3.0.0. It does not apply to version 3.1.0 and later.
To determine which datastore can be accessed by all nodes, the vSphere Container Storage
Plug-in identifies the ESXi hosts where the nodes are placed. It then identifies the datastores
that are mounted on those ESXi hosts. The vSphere Container Storage Plug-in supplies this
information to the CreateVolume API, which selects the datastore with the highest capacity from
the supplied datastores for volume provisioning.
However, if the nodes are not distributed across all ESXi hosts in the vSphere cluster and are
instead placed on a subset of ESXi hosts, and if that subset of ESXi hosts has some shared
datastores, the volume might get provisioned on those datastores. Later, when you add a new
node to another ESXi host that does not have access to the shared datastore accessible to the
subset of ESXi hosts, the provisioned volume cannot be used on the newly added node.
This situation also applies to topology-aware setups. For example, when an availability zone has
only a single node, the volume might get provisioned on the ESXi host where the node is located.
Later, when you add a new node to the availability zone, the volume provisioned on the local
datastore cannot be used with the new node.
To avoid this situation, use a storage policy in the StorageClass to select a datastore, so that the
cluster can be scaled without any issues. For example, if you have several nodes in the vSphere
cluster and you want to use a datastore that is accessible to all ESXi hosts in the cluster, define
a storage policy that is compliant with that datastore. Then specify the storage policy in the
StorageClass when provisioning a volume. As a result, you can avoid provisioning the volume on
a datastore shared among a few ESXi hosts and a local datastore. And new nodes can be added
easily.
As a cluster administrator, you must know the details of the storage device, its supported
configurations, and mount options.
To make existing storage available to a cluster user, you must manually create the storage
device, a PersistentVolume, and a PersistentVolumeClaim. Because the PV and the storage
device already exist, you do not need to specify a storage class name in the PVC specification.
You can use different ways to create a static PV and PVC binding, for example, label matching
or volume size matching.
Note Creating multiple PVs for the same volume backed by a vSphere virtual disk (First Class
Disk) in the Kubernetes cluster is not supported.
The common use cases of static volume provisioning include the following.
n You have provisioned persistent storage, a First Class Disk (FCD), directly in your vCenter
Server, and want to use this FCD in your cluster.
n You have provisioned a volume with the reclaimPolicy: Retain parameter in the storage class
by using dynamic provisioning. You have removed the PVC, but the PV, the physical storage
in vCenter Server, and the data still exist. You want to access the retained data from an
application in your cluster.
n You have provisioned a PV in a namespace of your cluster. You want to use the same storage
instance for an application pod that is deployed to a different namespace in your cluster.
n You have provisioned a PV for your cluster. To share the same persistent storage instance
with other clusters in the same zone, you must manually create the PV and matching PVC in
the other cluster.
Note Sharing persistent storage across clusters is available only if the cluster and the storage
instance are located in the same zone.
Statically provision a Single Access (RWO) Volume backed by vSphere Virtual Disk (First Class
Disk)
Procedure
apiVersion: v1
kind: PersistentVolume
metadata:
name: static-vanilla-rwo-filesystem-pv
annotations:
pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: example-vanilla-rwo-filesystem-sc
claimRef:
namespace: default
name: static-vanilla-rwo-filesystem-pvc
csi:
driver: csi.vsphere.vmware.com
fsType: ext4 # Change fstype to xfs or ntfs based on the requirement.
volumeAttributes:
type: "vSphere CNS Block Volume"
volumeHandle: 0c75d40e-7576-4fe7-8aaa-a92946e2805d # First Class Disk (Improved Virtual Disk) ID
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: static-vanilla-rwo-filesystem-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: example-vanilla-rwo-filesystem-sc
volumeName: static-vanilla-rwo-filesystem-pv
---
3 Verify that the PVC you imported has been created and the PersistentVolume is attached to
it.
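For example (names from the manifests above):

kubectl describe pvc static-vanilla-rwo-filesystem-pvc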
StorageClass:
Status: Bound
Volume: static-pv-name
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events: <none>
If the operation is successful, the Status section displays Bound and the Volume field is
populated.
If the operation is successful, the PV shows up in the output. You can also see that the
VolumeHandle key is populated. Status shows Bound. You can also see that Claim is set to
static-pvc-name.
The following example shows how the XFS file system is used to mount a volume inside a
MongoDB application.
Prerequisites
To format and mount the volume inside the container with the XFS file system, specify xfs as
the csi.storage.k8s.io/fstype in the storage class.
Note XFS file system support for vSphere Container Storage Plug-in is validated using Ubuntu
20.04.2 LTS with kernel version 5.4.0-66-generic. However, vSphere Container Storage Plug-in is
currently not compatible with the XFS file system on CentOS 7 and Red Hat Enterprise Linux 7
due to issues with xfsprogs 5.8.0-1.ph4 and the respective kernels.
Procedure
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-mongo-sc
annotations:
storageclass.kubernetes.io/is-default-class: "true" # Optional
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true # Optional: only applicable to vSphere 7.0U1 and above
parameters:
storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
csi.storage.k8s.io/fstype: "xfs" # Optional Parameter
2 Create PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-mongo-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: example-mongo-sc
b Create the PVC.
3 Create a Kubernetes Secret with the MongoDB credentials.
apiVersion: v1
data:
password: cGFzc3dvcmQxMjM= #password123
username: YWRtaW51c2Vy #adminuser
kind: Secret
metadata:
creationTimestamp: null
name: mongo-creds
4 Deploy mongodb.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-creds
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-creds
key: password
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
volumes:
- name: "mongo-data-dir"
persistentVolumeClaim:
claimName: "example-mongo-data"
c Verify that the file system used to mount the volume inside the mongodb pod is XFS using
the following command.
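For example, a minimal check, assuming the Deployment is named mongo as in the example
above:

kubectl exec deploy/mongo -- df -Th /data/db

The output shows xfs in the Type column when the volume is formatted with XFS.

5 Create a NodePort service to expose mongodb outside the cluster.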
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-nodeport-svc
spec:
ports:
- port: 27017
  protocol: TCP
  targetPort: 27017
  nodePort: 32000
selector:
app: mongo
type: NodePort
c To connect to mongodb from outside the Kubernetes cluster, use the worker node IP address
or a load balancer address.
6 Access mongodb.
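For example, assuming the mongosh client is installed on a machine that can reach a worker
node (the address is a placeholder):

mongosh "mongodb://adminuser:password123@<worker-node-ip>:32000"

The credentials match the mongo-creds Secret, and 32000 is the nodePort of the service above.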
db1> db.blogs.find()
[ { _id: ObjectId("63da54d31b1b7f11e9cefb35"), name: 'devopscube' } ]
For information about raw block volume support in Kubernetes, see Raw Block Volume Support.
Certain applications require direct access to a block device. When using a raw block device
without a file system, Kubernetes can provide better support for high-performance applications
that are capable of consuming and manipulating block storage for their needs. Applications
such as MongoDB and Cassandra, which require consistent I/O performance and low latency,
can benefit from raw block volumes and organize their data directly on the underlying storage.
Requirements
When you use raw block volumes with vSphere Container Storage Plug-in, follow these
guidelines and requirements.
n Use only single-access ReadWriteOnce raw block volumes. vSphere Container Storage Plug-in
does not support raw block volumes that use the ReadWriteMany access mode.
Procedure
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-raw-block-sc
provisioner: csi.vsphere.vmware.com
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-raw-block-pvc
spec:
volumeMode: Block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: example-raw-block-sc
Procedure
apiVersion: v1
kind: Pod
metadata:
name: example-raw-block-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox:1.24
command: ["/bin/sh", "-c", "while true ; do sleep 2 ; done"]
volumeDevices:
- devicePath: /dev/xvda
name: data
restartPolicy: Never
volumes:
- name: data
persistentVolumeClaim:
claimName: example-raw-block-pvc
Note File volumes do not work in conjunction with such functionality as volume expansion and
encryption.
n Verify that your vSphere environment meets necessary requirements, see Compatibility
Matrices for vSphere Container Storage Plug-in, Supported Kubernetes Functionality, and
Configuration Maximums for vSphere Container Storage Plug-in.
n Enable and configure the file service in your vSAN cluster configuration to create file share
volumes. Configure the required file service domains, IP pools, and network. For more
information, see vSAN File Service.
n Establish a dedicated file share network connecting all the Kubernetes nodes. Ensure this
network is routable to the vSAN File Share network. For more information, see Configuring
Network Access to vSAN File Share.
n Configure the Kubernetes nodes to use the DNS server that was used to configure the file
services in the vSAN cluster. This helps the nodes to resolve the file share access points with
Fully Qualified Domain Name (FQDN) while mounting the file volume to the pod.
You can retrieve the vSAN file service DNS configuration by navigating to the Configure tab
of your vSAN cluster and clicking File Service.
n If you have file shares shared across more than one cluster in your vCenter Server, deleting
a PVC with the Delete reclaim policy in one cluster might delete the underlying file share. This
action might make the volume unavailable for the rest of the clusters.
n After you finish setting up file services, get started with vSphere CSI file services integration
with your applications. For more information, see a video on Cloud Native Storage and vSAN
File Services Integration. For Storage Class, PersistentVolumeClaim, PersistentVolume and
Pod specs, see Try-out YAMLs.
n vSphere Container Storage Plug-in can mount vSAN file service volumes only with NFS 4.1.
NFS 3 is not supported.
n When you request an RWX volume in Kubernetes, vSAN File Service creates an NFS-based file
share of the requested size and the appropriate SPBM policy. One vSAN file share is created
per RWX volume. VMware supports 100 shares per vSAN File Service cluster, which means you
can have no more than 100 RWX volumes.
Prerequisites
n Set up your environment with vSAN File Service enabled. See Requirements for File Volumes
with vSphere Container Storage Plug-in.
n For additional information about integrating vSphere Container Storage Plug-in file services
with your applications, see Cloud Native Storage and vSAN File Services Integration.
n Use sample YAML files for Storage Class, PersistentVolumeClaim, PersistentVolume, and Pod
specs. For more information, see the following links.
n Block Volume.
Procedure
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-vanilla-file-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: example-vanilla-file-sc
Optionally, you can describe the corresponding PV after the PVC is bound.
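For example (the PV name is taken from the sample output below):

kubectl describe pv pvc-45cea491-8399-11ea-883a-005056b61591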
The output is similar to the following:
Name: pvc-45cea491-8399-11ea-883a-005056b61591
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: example-vanilla-file-sc
Status: Bound
Claim: default/example-vanilla-file-pvc
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
VolumeHandle: file:53bf6fb7-fe9f-4bf8-9fd8-7a589bf77760
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1587430348006-8081-
csi.vsphere.vmware.com
type=vSphere CNS File Volume
Events: <none>
The VolumeHandle associated with the PV contains the file: prefix, which indicates that it is a
file volume.
Option: Read the same file share from multiple pods.
Description: Specify the PVC associated with the file share in the claimName field in all pod
specifications.

Option: Create a pod with Read-Only access to the PVC.
Description: Specify readOnly as true in the persistentVolumeClaim section. Setting just
the accessModes to ReadOnlyMany in the PVC specification is not sufficient to
make the PVC Read-Only to the pods.
apiVersion: v1
kind: Pod
metadata:
name: example-vanilla-file-pod2
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox:1.24
command: ["/bin/sh", "-c", "while true ; do sleep
2 ; done"]
volumeMounts:
- name: test-volume
mountPath: /mnt/volume1
restartPolicy: Never
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: example-vanilla-file-pvc
readOnly: true
If you access this pod and try to create a file in the mountPath, which
is /mnt/volume1, you get an error.
vSphere Container Storage Plug-in supports only vSAN file service volumes that are created with
a hard quota limit. File service volumes that do not have a hard quota limit are not supported.
Procedure
apiVersion: v1
kind: PersistentVolume
metadata:
name: static-file-share-pv-name
annotations:
pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
labels:
"static-pv-label-key": "static-pv-label-value"
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
csi:
driver: "csi.vsphere.vmware.com"
volumeAttributes:
type: "vSphere CNS File Volume"
"volumeHandle": "file:236b3e6b-cfb0-4b73-a271-2591b2f31b4c"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: static-file-share-pvc-name
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
selector:
matchLabels:
static-pv-label-key: static-pv-label-value
storageClassName: ""
n Ensure that you retain the file: prefix of the vSAN file share when filling in the volumeHandle
field in the PV specification.
Note For file volumes, CNS supports multiple PVs that refer to the same file-share volume.
In addition, you can use the volumeBindingMode parameter in the StorageClass to specify when
the volume should be created and bound to the PVC request. vSphere Container Storage Plug-in
supports two volume binding modes that Kubernetes provides.
Immediate
This is the default volume binding mode. The mode indicates that volume binding and
dynamic provisioning occur immediately after the PersistentVolumeClaim is created. To
deploy workloads with Immediate binding mode in topology-aware environment, you can
specify zone parameters in the StorageClass.
WaitForFirstConsumer
This mode delays the creation and binding of a persistent volume for a PVC until a pod that
uses the PVC is created. When you use this mode, you do not need to specify StorageClass
zone parameters because pod policies drive the decision of which zones to use for volume
provisioning.
Before deploying workloads with topology, enable topology in the native Kubernetes cluster in
your vSphere environment. For more information, see Deploy vSphere Container Storage Plug-in
with Topology.
File volumes remain accessible to applications in other availability zones. Even if you choose a
zone for the file volumes, applications across different availability zones can still access these
file volumes.
To deploy workloads with Immediate binding mode in a topology-aware environment, you must
specify zone parameters in the StorageClass.
Prerequisites
Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.
Procedure
When you do not specify the volume binding mode, it is Immediate by default.
You can define network permissions in the VSPHERE_CSI_CONFIG secret to restrict volume
provisioning only in specific networks. To define network permissions, see Create a
Kubernetes Secret for vSphere Container Storage Plug-in.
You can also specify zone parameters. In the following example, the StorageClass can
provision volumes on either zone-a or zone-b.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
- matchLabelExpressions:
- key: topology.csi.vmware.com/k8s-region
values:
- region-1
- key: topology.csi.vmware.com/k8s-zone
values:
- zone-a
- zone-b
parameters:
csi.storage.k8s.io/fstype: "nfs4"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-multi-zones-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
storageClassName: example-multi-zones-sc
4 Verify that the PV that got provisioned does not have any node affinity rules on it.
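For example (the PV name is taken from the sample output below):

kubectl describe pv pvc-012cc523-03f0-45ea-9213-883362436591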
Name: pvc-012cc523-03f0-45ea-9213-883362436591
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: example-multi-zones-sc
Status: Bound
Claim: default/example-multi-zones-pvc
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 100Mi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
FSType: nfs4
VolumeHandle: file:fd60964d-d956-42bf-8fe5-37534dc4861a
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1704350366618-902-
csi.vsphere.vmware.com
type=vSphere CNS File Volume
Events: <none>
apiVersion: v1
kind: Pod
metadata:
name: example-multi-zones-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox:1.24
command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod
o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
volumeMounts:
- name: test-volume
mountPath: /mnt/volume1
restartPolicy: Never
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: example-multi-zones-pvc
Results
The pod may or may not get scheduled in the zone where the volume has been provisioned.
Prerequisites
Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.
Procedure
When you do not specify the volume binding mode, it is Immediate by default.
You can also specify zone parameters. In the following example, the StorageClass can
provision volumes on either zone-a or zone-b.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
- matchLabelExpressions:
- key: topology.csi.vmware.com/k8s-region
values:
- region-1
- key: topology.csi.vmware.com/k8s-zone
values:
- zone-a
- zone-b
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-multi-zones-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
storageClassName: example-multi-zones-sc
4 Verify that the PV node affinity rules include at least one domain within zone-a or zone-b
depending on whether the selected datastore is local or shared across zones.
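For example, a quick check (the PV name is auto-generated; substitute yours):

kubectl describe pv <pv-name>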
apiVersion: v1
kind: Pod
metadata:
name: example-multi-zones-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox:1.24
command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod
o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
volumeMounts:
- name: test-volume
mountPath: /mnt/volume1
restartPolicy: Never
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: example-multi-zones-pvc
Notice that the pod is scheduled in the zone where the volume has been provisioned. In this
example, it is zone-b.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
parameters:
storagepolicyname: "shared datastore zones A and B"
allowedTopologies:
- matchLabelExpressions:
- key: topology.csi.vmware.com/k8s-region
values:
- region-1
- key: topology.csi.vmware.com/k8s-zone
values:
- zone-a
- zone-b
Prerequisites
Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.
Procedure
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: topology-aware-file-volume
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
replicas: 2
selector:
matchLabels:
app: nginx
serviceName: nginx
template:
metadata:
labels:
app: nginx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.csi.vmware.com/k8s-zone
operator: In
values:
- zone-a
- zone-b
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nginx
topologyKey: topology.csi.vmware.com/k8s-zone
containers:
- name: nginx
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
- name: logs
mountPath: /logs
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteMany" ]
storageClassName: topology-aware-file-volume
resources:
requests:
storage: 2Gi
- metadata:
name: logs
spec:
accessModes: [ "ReadWriteMany" ]
storageClassName: topology-aware-file-volume
resources:
requests:
storage: 1Gi
3 Verify that the StatefulSet is in the Running state and check that the pods are evenly
distributed among zones zone-a and zone-b.
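For example, a quick check (names and labels from the manifest above):

kubectl get statefulset web
kubectl get pods -l app=nginx -o wide

The -o wide output shows the node, and therefore the zone, where each pod runs.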
Note that the node affinity rules are not published on the PV.
Prerequisites
Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.
Procedure
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: topology-aware-standard
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
replicas: 2
selector:
matchLabels:
app: nginx
serviceName: nginx
template:
metadata:
labels:
app: nginx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.csi.vmware.com/k8s-zone
operator: In
values:
- zone-a
- zone-b
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nginx
topologyKey: topology.csi.vmware.com/k8s-zone
containers:
- name: nginx
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
- name: logs
mountPath: /logs
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: topology-aware-standard
resources:
requests:
storage: 2Gi
- metadata:
name: logs
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: topology-aware-standard
resources:
requests:
storage: 1Gi
3 Verify that the StatefulSet is in the Running state and check that the pods are evenly
distributed among zones zone-a and zone-b.
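As before, a quick check (names and labels from the manifest above):

kubectl get pods -l app=nginx -o wide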
Notice that the PV node affinity rules include at least one domain within zone-a or zone-b
depending on whether the selected datastore is local or shared across zones.
You can use this functionality in an environment with a replicated datastore that is shared across
two topology domains, or sites. The datastore is active in one of the topology domains, and is
passive in the other.
When a site failure occurs, the active datastore on the failed site becomes passive, and the
passive datastore on the other site becomes active.
In the following diagram, the DS-1 datastore is active in Site 1 and passive in Site 2. The DS-2
datastore is active in Site 2 and passive in Site 1.
Both datastores, DS-1 and DS-2, are accessible to all nodes in both sites. A typical volume
provisioning request for Site 1 would provision a volume on either DS-1 or DS-2.
You can set preference to a particular datastore for a site, so that volume provisioning is limited
only to the active datastore.
In this example, the DS-1 datastore is set as a preferred datastore for Site 1 and DS-2 datastore is
a preferred datastore for Site 2.
Prerequisites
n Ensure that the vSphere Container Storage Plug-in version is 2.6.1 or later.
Procedure
1 In the vSphere Client, create a tag category named cns.vmware.topology-preferred-datastores.
2 Create tags under this category that match the tags that you used for the topology domain
name. For example,
n For Site 1, if the topology domain name is site-1, create a site-1 tag under the
cns.vmware.topology-preferred-datastores category.
n For Site 2, if the topology domain name is site-2, create a site-2 tag under the
cns.vmware.topology-preferred-datastores category.
3 Assign the site-1 and site-2 tags created under the cns.vmware.topology-preferred-datastores
category to the DS-1 and DS-2 datastores respectively.
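For example, a sketch using the govc CLI, which is one possible way to create and attach the
tags (the datacenter path is a placeholder):

govc tags.category.create cns.vmware.topology-preferred-datastores
govc tags.create -c cns.vmware.topology-preferred-datastores site-1
govc tags.create -c cns.vmware.topology-preferred-datastores site-2
govc tags.attach site-1 /<datacenter>/datastore/DS-1
govc tags.attach site-2 /<datacenter>/datastore/DS-2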
During a datastore failover, workload node VMs created on the affected datastore become
inaccessible, and CPI deletes the API server objects for these nodes from the Kubernetes
cluster. To avoid this issue, it is recommended to install vSphere Cloud Provider v1.24.2 or
later if you are using Kubernetes version 1.24.
Results
After you set the preference on the datastores, any volume provisioning request for site-1
ensures that the volume is allocated on the DS-1 datastore.
What to do next
The CSI driver picks up any modifications to the preferred datastore configuration every 5
minutes. If required, you can expedite these changes by restarting the vSphere CSI controller
pods.
In the following diagram, the Kubernetes cluster spans across three vCenter Server instances
that represent different availability zones. Kubernetes control plane nodes and worker nodes are
distributed across these three zones. A volume provisioning request with zone1, specified in a
topology requirement, provisions the volume on vSAN1 or VMFS1 datastore connected to VC1.
For more information, see Deploying vSphere Container Storage Plug-in with Multiple vCenter
Server Instances.
Note vSphere Container Storage Plug-in does not support datastores shared across multiple
vCenter Server instances.
n In the storage class, specify topology segments, such as regions or zones, for only a single
vCenter Server.
If topology segments in the storage class span across more than one vCenter Server,
provisioning of a corresponding volume fails.
The following is an example of a storage class used for dynamic provisioning of a block or a
file volume. Provisioning is done from one of the three vCenter Server instances shown in the
diagram. In this example, the storage class indicates zone-1, which means that only VC-1 can
initiate the provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
- matchLabelExpressions:
- key: topology.csi.vmware.com/k8s-zone
values:
- zone-1
To provision file volumes statically in an environment with multiple vCenter Server instances,
follow the same steps required for provisioning static file volumes in a single vCenter Server.
See Statically Provision File Volumes with vSphere Container Storage Plug-in.
The CSI driver can identify the vCenter Server where the backing file share is located and
create a CNS volume for the same.
To provision block volumes statically in the environment with multiple vCenter Server
instances, follow the same steps required for provisioning topology-aware volumes. For
more information on topology-aware volume provisioning, see Topology-Aware Volume
Provisioning.
Specify affinity rules for a PersistentVolume object in the nodeAffinity section. The affinity
rules indicate the topology segment and vCenter Server to which the volume belongs. If you
do not specify the affinity rules in the PersistentVolume object, volume registration fails.
apiVersion: v1
kind: PersistentVolume
metadata:
name: static-pv-name
annotations:
pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
labels:
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
claimRef:
namespace: default
name: static-pvc-name
csi:
driver: "csi.vsphere.vmware.com"
volumeAttributes:
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.csi.vmware.com/k8s-zone
operator: In
values:
- zone-2
Kubernetes supports offline and online modes of volume expansion. When the PVC is used by
a pod and is mounted on a node, the volume expansion operation is categorized as online. In
all other cases, it is an offline expansion. For information about vSphere versions that support
volume expansion, see Supported Kubernetes Functionality.
Volume expansion is not supported when a volume snapshot is present, or when a node VM
snapshot exists with the volume attached to it.
Feature Gate
The ExpandCSIVolumes feature is enabled by default because it was promoted to beta in
Kubernetes 1.16. For Kubernetes releases earlier than 1.16, enable the ExpandCSIVolumes
feature gate to support volume expansion in vSphere Container Storage Plug-in.
Sidecar Container
An external resizer sidecar container implements the logic of watching the Kubernetes API for
PVC edits, issuing the ControllerExpandVolume RPC call against a CSI endpoint, and updating
the PersistentVolume object to reflect the new size. This container is already deployed as
part of the vsphere-csi-controller pod.
n vSphere Container Storage Plug-in does not support expansion of a volume backed by
vSAN file service.
Procedure
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
Ensure that the PVC is in the Bound state. If you are using a statically provisioned PVC,
ensure that the PVC and PV specifications have the storageClassName parameter
pointing to a StorageClass with allowVolumeExpansion set to true.
a Patch the PVC to increase the storage size, in this example, to 2Gi.
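A sketch of the patch command (the PVC name matches the examples in this section):

kubectl patch pvc example-block-pvc -p '{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'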
This action triggers an expansion in the volume associated with the PVC in vSphere Cloud
Native Storage.
The output looks similar to the following. The PVC shows the increase in size after the
volume underneath is expanded.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY
STATUS CLAIM STORAGECLASS REASON AGE
pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd 2Gi RWO Delete
Bound default/example-block-pvc example-block-sc 25m
Procedure
1 Create a StorageClass with allowVolumeExpansion set to true.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
Ensure that the PVC is in the Bound state. If you are using a statically provisioned PVC,
ensure that the PVC and PV specifications have the storageClassName parameter
pointing to a StorageClass with allowVolumeExpansion set to true.
a Patch the PVC to increase the storage size, in this example, to 2Gi.
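For example, the same patch as in the offline case (the PVC name is assumed to be
example-block-pvc):

kubectl patch pvc example-block-pvc -p '{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'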
This action triggers an expansion in the volume associated with the PVC. The capacity
of the corresponding PV object changes. However, the capacity of the PVC does not
change until the PVC is used by a pod and mounted on a node.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY
STATUS CLAIM STORAGECLASS REASON AGE
pvc-9e9a325d-ee1c-11e9-a223-005056ad1fc1 2Gi RWO Delete
Bound default/example-block-pvc example-block-sc 6m44s
You can also see a FilesystemResizePending condition applied on the PVC when you
describe it.
apiVersion: v1
kind: Pod
metadata:
name: example-block-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox:1.24
command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod
o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
volumeMounts:
- name: test-volume
mountPath: /mnt/volume1
restartPolicy: Never
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: example-block-pvc
The kubelet on the node triggers the file system expansion on the volume when the PVC
is attached to the pod.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY
STATUS CLAIM STORAGECLASS REASON AGE
pvc-24114458-9753-428e-9c90-9f568cb25788 2Gi RWO Delete
Bound default/example-block-pvc example-block-sc 2m3s
vSAN stretched clusters support file volumes backed by vSAN file shares. For more information,
see Provisioning File Volumes with vSphere Container Storage Plug-in.
Prerequisites
When you plan to configure a Kubernetes cluster on a vSAN stretched cluster, consider the
following items:
n A generic Kubernetes cluster does not enforce the same storage policy on the node VMs and
on the persistent volumes. The vSphere administrator is responsible for the correct storage
policy configuration, assignment, and use of the storage policies within the Kubernetes
clusters.
n Use a VM storage policy with the same replication and site affinity settings for all storage
objects on the Kubernetes cluster. Use the same storage policy for all node VMs, including
control plane and worker nodes, and for all PVs.
n The topology feature cannot be used to provision a volume that belongs to a specific fault
domain within the vSAN stretched cluster.
Procedure
For more information, search for vSAN stretched cluster on the VMware vSAN
Documentation site.
d Enable host monitoring and configure host failure response, response for host isolation,
and VM monitoring.
Note VMware recommends that you disable VM Component Protection (VMCP) when all
node VMs and volumes are deployed on the vSAN datastore.
2 Create a VM storage policy compliant with the vSAN stretched cluster requirements.
Select Dual site mirroring to have data mirrored at both sites of the stretched cluster.
For the stretched cluster, the setting defines the number of disk or host failures a storage
object can tolerate for each of the sites. To tolerate n failures with mirroring, the number
of required fault domains, or hosts within a site for the stretched cluster, is 2n + 1.
RAID-1 mirroring provides better performance. RAID-5 and RAID-6 achieve failure
tolerance using parity blocks, which provides better space efficiency. These options are
available only on all-flash clusters.
3 Create VM-Host affinity rules to place Kubernetes nodes on specific primary or secondary
site, such as Site-A.
For information about affinity rules, see Create a VM-Host Affinity Rule in the vSphere
Resource Management documentation.
4 Deploy Kubernetes VMs using the vSAN stretched cluster storage policy.
5 Create a storage class using the vSAN stretched cluster storage policy.
6 Deploy persistent volumes using the vSAN stretched cluster storage class.
What to do next
Depending on your needs and environment, you can use one of the following deployment
scenarios when deploying your Kubernetes cluster and workloads on the vSAN stretched cluster.
Deployment 1
In this deployment, the control plane and worker nodes are placed on the primary site, but can
fail over to the other site if the primary site fails. You deploy HA Proxy on the primary site.
This is also known as an Active-Passive deployment because only one site of the stretched
vSAN cluster is used to deploy VMs.
If you plan to use file volumes (RWX volumes), it is recommended to configure the vSAN file
service domain to place file servers on the active site (preferred site). This reduces the cross-site
traffic latency and delivers better performance for applications using file volumes.
Node Placement:
n The control plane and worker nodes are on the primary site. They can fail over to the other
site if the primary site fails.
n HA Proxy is on the primary site.
DRS: Enabled
vSphere HA: Enabled
Scenario: Several ESXi hosts fail on the primary site.
n Kubernetes node VMs move from unavailable hosts to the available hosts within the primary
site.
n If the worker node needs to be restarted, pods running on that node can be rescheduled and
recreated on another node.
n If the control plane node needs to be restarted, the existing application workload does not
get affected.

Scenario: The entire primary site and all hosts on the site fail.
n Kubernetes node VMs move from the primary site to the secondary site.
n You experience a complete downtime until node VMs restart on the secondary site.

Scenario: Several hosts fail on the secondary site.
The failure does not affect the Kubernetes cluster because the entire cluster is at the primary
site.

Scenario: The entire secondary site and all hosts on the site fail.
n The failure does not affect the Kubernetes cluster because the entire cluster is at the primary
site.
n Replication for storage objects stops because the secondary site is not available.

Scenario: Intersite network failure occurs.
n The failure does not affect the Kubernetes cluster because the entire cluster is at the primary
site.
n Replication for storage objects stops because the secondary site is not available.
Deployment 2
With this model, place the control plane nodes on the primary site and worker nodes can be
spread across the primary and secondary site. You deploy HA Proxy on the primary site.
DRS: Enabled
vSphere HA: Enabled
Scenario: Several ESXi hosts fail on the primary site.
n Kubernetes node VMs move from unavailable hosts to the available hosts within the same
site. If resources are not available, they move to another site.
n If the worker node needs to be restarted, pods running on that node might be rescheduled
and recreated on another node.
n If the control plane node needs to be restarted, the existing application workload does not
get affected.

Scenario: The entire primary site and all hosts on the site fail.
n Kubernetes control plane node VMs and some worker nodes present on the primary site
move from the primary site to the secondary site.
n Expect the control plane downtime until the control plane nodes restart on the secondary
site.
n Expect partial downtime for pods running on the worker nodes on the primary site.
n Pods deployed on the worker nodes on the secondary site are not affected.

Scenario: Several hosts fail on the secondary site.
Node VMs and pods running on the node VMs restart on another host.

Scenario: The entire secondary site and all hosts on the site fail.
n The Kubernetes control plane is unaffected.
n Worker node VMs from the secondary site move to the primary site.
n Pods deployed on the worker nodes on the secondary site are affected. They restart along
with node VMs.
Deployment 3
In this deployment model, you can place two control plane nodes on the primary site and one
control plane node on the secondary site. Deploy HA Proxy on the primary site. Worker nodes
can be on any site.
This model requires specific DRS policy rules. One rule to specify affinity between two control
plane nodes and the primary site and another rule for affinity between the third control plane
node and the secondary site.
Requirements:
DRS: Enabled
vSphere HA: Enabled
Scenario: Several ESXi hosts fail on the primary site.
n Affected nodes get restarted on the available hosts on the primary site.
n If both control plane nodes are present on the failed hosts on the primary site, the control
plane is down until both control plane nodes recover on the available hosts on the primary
site.
n While nodes are restarting on available hosts, pods might get rescheduled and recreated on
available nodes.

Scenario: The entire primary site and all hosts on the site fail.
n Node VMs move from the primary site to the secondary site.
n Expect a downtime until node VMs restart on the secondary site.

Scenario: Several hosts fail on the secondary site.
n Node VMs and pods running on the node VMs restart on another host.
n If a control plane node on the secondary site is affected, the Kubernetes control plane
remains unaffected. Kubernetes remains accessible through the two control plane nodes on
the primary site.

Scenario: The entire secondary site and all hosts on the site fail.
n The control plane node and worker nodes from the secondary site are migrated to the
primary site.
n Pods deployed on the worker nodes on the secondary site are affected. They restart along
with the node VMs.
Procedure
1 Edit the existing VM storage policy that is used for provisioning volumes and node VMs on
the vSAN cluster to add the stretched cluster parameters.
3 Apply the updated storage policy to the persistent volumes that have the Out of date status.
HCI Mesh is a software-based approach for disaggregation of compute and storage resources
in vSAN. It brings multiple independent vSAN clusters together by enabling cross-cluster use of
remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently use and
consume data center resources, which provides simple storage management at scale. You can
create an HCI Mesh by mounting remote vSAN datastores on vSAN clusters and enabling data
sharing from vCenter Server.
[Figure: A Kubernetes cluster spanning availability zones AZ 1, AZ 2, and AZ 3 consumes a
remote vSAN datastore from a vSAN storage cluster.]
Note HCI Mesh does not support remote vSAN datastores on stretched clusters. For more
information on sharing remote datastores with HCI Mesh, see Sharing Remote Datastores with
HCI Mesh.
n If you have an SPBM policy that is compatible with all the vSAN datastores in an HCI Mesh
deployment, you can use it in StorageClasses in the Kubernetes cluster. However, vSphere
Container Storage Plug-in selects either a local vSAN datastore or a remote vSAN datastore
for volume placement. If there is a difference in data path performance between the two types
of datastores, and you want to offer two different SLAs for your applications, you can
create two separate policies and storage classes.
n If you have two vSAN clusters and have mounted a remote vSAN datastore on one of the
clusters, you can create two SPBM policies per cluster. One policy is assigned to the local
vSAN datastore, and the other to the remote vSAN datastore. Then create two StorageClass
objects in the Kubernetes cluster, one for each policy. This allows you to assign different
SLAs for the two datastores, as shown in the following StorageClass examples.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: block-volume-local-vsan-cluster
provisioner: csi.vsphere.vmware.com
parameters:
storagepolicyname: "local-vsan-cluster-policy"
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: block-volume-remote-vsan-cluster
provisioner: csi.vsphere.vmware.com
parameters:
storagepolicyname: "remote-vsan-cluster-policy"
Limitations
The following limitations apply when you use vSphere Container Storage Plug-in with HCI Mesh
deployment.
1 vSphere Container Storage Plug-in does not support RWX and File Volumes on HCI Mesh
deployments.
2 All objects that comprise a VM must reside on the same datastore when you use HCI Mesh
deployment. For more information, see Sharing Remote Datastores with HCI Mesh.
3 vSphere Container Storage Plug-in version 3.1.0 does not support vSAN stretched clusters in
combination with HCI Mesh. For more information, see Sharing Remote Datastores with HCI
Mesh.
In addition, follow these requirements to use the volume snapshot and restore feature with
vSphere Container Storage Plug-in:
Procedure
1 Install the version of vSphere Container Storage Plug-in that supports volume snapshots.
See Supported Kubernetes Functionality and Deploying the vSphere Container Storage Plug-
in on a Native Kubernetes Cluster.
2 Deploy the required components using the following script available at:
https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/manifests/vanilla/deploy-csi-snapshot-components.sh
To obtain a detailed workflow of the script, run the bash deploy-csi-snapshot-components.sh -h
command.
% ./deploy-csi-snapshot-components.sh
No existing snapshot-controller Deployment found, deploying it now..
Start snapshot-controller deployment...
customresourcedefinition.apiextensions.k8s.io/
volumesnapshotclasses.snapshot.storage.k8s.io created
Created CRD volumesnapshotclasses.snapshot.storage.k8s.io
customresourcedefinition.apiextensions.k8s.io/
volumesnapshotcontents.snapshot.storage.k8s.io created
Created CRD volumesnapshotcontents.snapshot.storage.k8s.io
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io
created
Created CRD volumesnapshots.snapshot.storage.k8s.io
✅ Deployed VolumeSnapshot CRDs
serviceaccount/snapshot-controller unchanged
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role unchanged
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection unchanged
✅ Created RBACs for snapshot-controller
deployment.apps/snapshot-controller created
deployment.apps/snapshot-validation-deployment patched
Waiting for deployment spec update to be observed...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 0 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 2 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 old replicas
are pending termination...
deployment "snapshot-validation-deployment" successfully rolled out
deployment "vsphere-csi-controller" successfully rolled out
Note
n The snapshot-validation-deployment validation webhook (version 5.0.1) is also deployed as
part of the deployment script.
n If the component version number is incorrect, the deployment fails with an error message.
n If there is a mismatch in the existing component version number, manually upgrade the
component, or delete it. After deletion, the script deploys the appropriate version of
the component.
Configure the following parameters only when the default constraints do not work for your use
cases. Otherwise, you can skip the configuration steps.
Prerequisites
n For better performance, use no more than two to three snapshots per virtual disk. For more
information, see Best practices for using VMware snapshots in the vSphere environment.
n The maximum number of snapshots per volume is configurable. The default is three.
Note
n The best practice guideline applies only to virtual disks on VMFS and NFS datastores, not to
those on Virtual Volumes and vSAN datastores.
n In addition to the global configuration parameter, granular configuration parameters per
datastore type are available.
Procedure
2 Update the config file of vSphere Container Storage Plug-in and add configuration
parameters for the snapshot feature under the [Snapshot] section.
$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
...
[Snapshot]
global-max-snapshots-per-block-volume = 5 # optional, defaults to 3 if unset
granular-max-snapshots-per-block-volume-vsan = 7 # optional, falls back to the global constraint if unset
granular-max-snapshots-per-block-volume-vvol = 8 # optional, falls back to the global constraint if unset
...
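The configuration file is typically mounted into the controller pod from the vsphere-config-secret
secret. A common way to apply the change, assuming the default secret name and namespace, is to
recreate the secret from the updated file.

% kubectl --namespace=vmware-system-csi delete secret vsphere-config-secret
% kubectl --namespace=vmware-system-csi create secret generic vsphere-config-secret --from-file=csi-vsphere.conf=/etc/kubernetes/csi-vsphere.conf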
The following is an example of a StorageClass to use with volume snapshots. Optional parameters
are commented out.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-vanilla-rwo-filesystem-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  # storagepolicyname: "vSAN Default Storage Policy"  # Optional Parameter
  # csi.storage.k8s.io/fstype: "ext4"  # Optional Parameter
n When you create a PVC from a VolumeSnapshot, it should reside on the same datastore
as the original VolumeSnapshot. Otherwise, the provisioning of that PVC will fail with the
following error:
failed to provision volume with StorageClass "vmfs12": rpc error: code = Internal
desc = failed to create volume. Error: failed to get the compatible datastore for
create volume from snapshot
0a3ce642-2c19-4d50-9534-7889b2a6db52+fc01aaa4-29d8-4a68-90ba-b1d53bb0657d with error:
failed to find datastore with URL "ds:///vmfs/volumes/62fd07ba-4b18326e-137a-1c34da62fa18/"
from the input datastore list, [Datastore:datastore-33 Datastore:datastore-34]
Note The datastore for the target PVC that you create from the VolumeSnapshot is
determined by the StorageClass in the PVC definition. Make sure that the StorageClass of
the target PVC and the StorageClass of the original source PVC point to the same datastore,
which is the datastore of the source PVC. This rule also applies to the topology requirements
in the StorageClass definitions. The requirements must also point to the same common
datastore. Any conflicting topology requirements result in the same error as shown above.
n You cannot delete or expand a volume that contains associated snapshots. Delete all
snapshots to expand or delete a volume.
n When you create a volume from a snapshot, ensure that the size of the volume matches the
size of the snapshot.
Procedure
1 Create a StorageClass.
2 Create a PVC.
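For reference, the following is a minimal PVC sketch that uses the StorageClass from the
earlier example. The PVC name matches the one that the VolumeSnapshot in step 4 references.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-rwo-pvc
spec:
  storageClassName: example-vanilla-rwo-filesystem-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi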
3 Create a VolumeSnapshotClass.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-vanilla-rwo-filesystem-snapshotclass
driver: csi.vsphere.vmware.com
deletionPolicy: Delete
$ kubectl apply -f example-snapshotclass.yaml
$ kubectl get volumesnapshotclass
NAME                                           DRIVER                   DELETIONPOLICY   AGE
example-vanilla-rwo-filesystem-snapshotclass   csi.vsphere.vmware.com   Delete           4s
4 Create a VolumeSnapshot.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-vanilla-rwo-filesystem-snapshot
spec:
  volumeSnapshotClassName: example-vanilla-rwo-filesystem-snapshotclass
  source:
    persistentVolumeClaimName: example-vanilla-rwo-pvc
$ kubectl apply -f example-snapshot.yaml
$ kubectl get volumesnapshot
NAME                                      READYTOUSE   SOURCEPVC                 SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                                  SNAPSHOTCONTENT                                    CREATIONTIME   AGE
example-vanilla-rwo-filesystem-snapshot   true         example-vanilla-rwo-pvc                           5Gi           example-vanilla-rwo-filesystem-snapshotclass   snapcontent-a7c00b7f-f727-4010-9b1a-d546df9a8bab   57s            58s
Note For more information on the YAML files mentioned in the above steps, see
https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/master/example/vanilla-k8s-RWO-filesystem-volumes.
Prerequisites
Note
n Pre-provisioned CSI snapshots are supported for CNS/FCD snapshots created using
Kubernetes VolumeSnapshot APIs for vSphere 7.0 Update 3 and later.
n Pre-provisioned CSI snapshots are not supported for FCD snapshots created using FCD
APIs directly.
n Construct the snapshot handle based on the combination of the FCD Volume ID and the
FCD Snapshot ID of the snapshot. For example, if the FCD Volume ID and FCD Snapshot ID
of an FCD snapshot are 4ef058e4-d941-447d-a427-438440b7d306 and
766f7158-b394-4cc1-891b-4667df0822fa, the constructed snapshot handle is
4ef058e4-d941-447d-a427-438440b7d306+766f7158-b394-4cc1-891b-4667df0822fa.
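For example, you can register a pre-provisioned snapshot with a static VolumeSnapshotContent
object that carries the constructed snapshot handle. The following is a minimal sketch; the
object and reference names are hypothetical.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: static-snapshot-content      # hypothetical name
spec:
  deletionPolicy: Delete
  driver: csi.vsphere.vmware.com
  source:
    snapshotHandle: 4ef058e4-d941-447d-a427-438440b7d306+766f7158-b394-4cc1-891b-4667df0822fa
  volumeSnapshotRef:
    name: static-snapshot            # hypothetical name
    namespace: default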
Procedure
1 Ensure that the volume snapshot that you want to restore is available in the current
Kubernetes cluster.
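To restore from the snapshot, create a PVC that references the VolumeSnapshot as its data
source. The following is a minimal sketch that reuses the names from the earlier example; the
restored PVC name is hypothetical.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-rwo-restored-pvc   # hypothetical name
spec:
  storageClassName: example-vanilla-rwo-filesystem-sc
  dataSource:
    name: example-vanilla-rwo-filesystem-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi   # matches the snapshot size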
Using the information captured in Prometheus, you can build Grafana dashboards that help you
analyze and understand the health and behavior of vSphere Container Storage Plug-in.
In the controller pod of vSphere Container Storage Plug-in, the following two containers expose
metrics:
n vsphere-csi-controller. The container provides communication from the Kubernetes Cluster
API server to the CNS component on vCenter Server for volume lifecycle operations.
n vsphere-syncer. The container sends metadata information about persistent volumes to the
CNS component on vCenter Server, so that it can be displayed in the vSphere Client in the
Container Volumes view.
The controller pod emits the following histogram metrics. In all of them, the value of the
status field can be pass or fail.
n vsphere_cns_volume_ops_histogram tracks CNS volume operations. The optype field can be one
of the following: create-volume, delete-volume, attach-volume, detach-volume,
update-volume-metadata, expand-volume, query-volume, query-all-volume, query-volume-info,
relocate-volume, configure-volume-acl, query-snapshots, create-snapshot, delete-snapshot.
Example:

vsphere_cns_volume_ops_histogram_sum{optype="attach-volume",status="pass"} 6.611152518
vsphere_cns_volume_ops_histogram_count{optype="attach-volume",status="pass"} 3

n vsphere_csi_volume_ops_histogram tracks CSI volume operations. The optype field can be one
of the following: attach-volume, detach-volume, expand-volume, create-snapshot,
delete-snapshot, list-snapshot. Example:

vsphere_csi_volume_ops_histogram_count{optype="create-volume",status="pass",voltype="block"} 3

n vsphere_full_sync_ops_histogram tracks full synchronization operations. Example:

vsphere_full_sync_ops_histogram_count{status="pass"} 73
Procedure
% cd kube-prometheus
% ls
CHANGELOG.md experimental
CONTRIBUTING.md go.mod
DCO go.sum
LICENSE jsonnet
Makefile jsonnetfile.json
README.md jsonnetfile.lock.json
RELEASE.md kubescape-exceptions.json
build.sh kustomization.yaml
code-of-conduct.md manifests
developer-workspace scripts
docs sync-to-internal-registry.jsonnet
example.jsonnet tests
examples
%
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator
created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
When deployed through kube-prometheus, the ClusterRole prometheus-k8s does not have
the apiGroups, resources, and verbs rules necessary to pick up metrics of vSphere Container
Storage Plug-in. You must modify the ClusterRole with the necessary rules.
a Update the ClusterRole with the necessary rules and apply the modified manifest.
% cat prometheus-clusterRole-updated.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.33.0
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs:
  - /metrics
  verbs:
  - get
% kubectl apply -f prometheus-clusterRole-updated.yaml
clusterrole.rbac.authorization.k8s.io/prometheus-k8s configured
You must create a ServiceMonitor object to monitor any service, such as vSphere Container
Storage Plug-in, through Prometheus.
a Create the manifest and deploy the ServiceMonitor object.
The object will be used to monitor the vsphere-csi-controller service. The endpoints
refer to ports 2112 (ctlr) and 2113 (syncer).
% cat vsphere-csi-controller-service-monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vsphere-csi-controller-prometheus-servicemonitor
  namespace: monitoring
  labels:
    name: vsphere-csi-controller-prometheus-servicemonitor
spec:
  selector:
    matchLabels:
      app: vsphere-csi-controller
  namespaceSelector:
    matchNames:
    - vmware-system-csi
  endpoints:
  - port: ctlr
  - port: syncer
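Assuming you saved the manifest as the vsphere-csi-controller-service-monitor.yaml file shown
above, apply it to the cluster.

% kubectl apply -f vsphere-csi-controller-service-monitor.yaml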
The logs list any potential issues with scraping metrics from vSphere Container Storage
Plug-in. For example, if you have not correctly updated the ClusterRole, the Prometheus logs
show authorization errors for the resources that Prometheus attempts to watch.
Launch Prometheus UI
Access Prometheus UI and view the vSphere Container Storage Plug-in metrics that Prometheus
collects.
Procedure
By default, the prometheus-k8s service is of type ClusterIP and is not reachable from outside
the cluster. You can use various methods to address this, such as changing the service type to
NodePort, or to LoadBalancer if you have one available to provide LoadBalancer IPs.
For the purposes of this testing, port-forward the service port (9090) to make it accessible
from the local host.
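For example, assuming the default kube-prometheus service name and namespace, the following
command forwards the Prometheus service port to the local host. You can then open
http://localhost:9090 in a browser.

% kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090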
Procedure
Use the port-forward functionality to access Grafana from a browser on the local host. This
time the port is 3000.
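Assuming the default kube-prometheus service name and namespace, the command is similar to the
following. You can then open http://localhost:3000 in a browser.

% kubectl --namespace monitoring port-forward svc/grafana 3000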
3 In Grafana UI, set up the dashboard for vSphere Container Storage Plug-in.
A dashboard similar to the following displays the vSphere Container Storage Plug-in metrics
that Prometheus has scraped and stored.
Procedure
The alert manager is normally deployed in addition to other services when you deploy
kube-prometheus. You can define alert rules for vSphere Container Storage Plug-in metrics. For
example, an alert can carry labels similar to the following:

labels:
  issue: Success rate of CSI volume OP "create-volume" is lower than 95% in last 6
hours.
  severity: warning
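As an illustration, the following is a minimal PrometheusRule sketch that could raise an alert
with the labels shown above. The rule and alert names are hypothetical, and the expression
assumes the vsphere_csi_volume_ops_histogram_count metric described earlier.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: vsphere-csi-alert-rules            # hypothetical name
  namespace: monitoring
  labels:
    prometheus: k8s
    role: alert-rules
spec:
  groups:
  - name: vsphere-csi.rules
    rules:
    - alert: CsiCreateVolumeLowSuccessRate  # hypothetical name
      expr: |
        sum(increase(vsphere_csi_volume_ops_histogram_count{optype="create-volume",status="pass"}[6h]))
          / sum(increase(vsphere_csi_volume_ops_histogram_count{optype="create-volume"}[6h])) < 0.95
      labels:
        issue: Success rate of CSI volume OP "create-volume" is lower than 95% in last 6 hours.
        severity: warning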
Procedure
u Collect logs for vSphere Container Storage Plug-in pods by using the following command.
Option Description
-c container-name The name of the container in the vSphere Container Storage Plug-in pod.
-n namespace The location where you deploy vSphere Container Storage Plug-in. Default is
vmware-system-csi.
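For example, to collect the logs of the vsphere-csi-controller container in one of the
controller pods, run a command similar to the following. The pod name suffix is a placeholder.

% kubectl logs vsphere-csi-controller-<pod-id> -c vsphere-csi-controller -n vmware-system-csi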
To collect vSphere CSI Node Daemonset Pod logs, use the following command:
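A typical invocation, assuming the default DaemonSet pod naming, is similar to the following.
The pod name suffix is a placeholder.

% kubectl logs vsphere-csi-node-<pod-id> -c vsphere-csi-node -n vmware-system-csi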
Note In a production environment, vSphere Container Storage Plug-in runs with multiple
replicas. To perform root cause analysis of any issues, collect the logs of all containers from
every replica of the vSphere Container Storage Plug-in pod.