
Getting Started with VMware vSphere Container Storage Plug-in

VMware vSphere Container Storage Plug-in 3.0



You can find the most up-to-date technical documentation on the VMware by Broadcom website at:

https://docs.vmware.com/

VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2021-2024 Broadcom. All Rights Reserved. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies. Copyright and trademark information.

Contents

Getting Started with VMware vSphere Container Storage Plug-in

Updated Information

1 vSphere Container Storage Plug-in Concepts
   Components of the vSphere Container Storage Plug-in
   Compatibility Matrices for vSphere Container Storage Plug-in
   Supported Kubernetes Functionality
   Configuration Maximums for vSphere Container Storage Plug-in
   vSphere Functionality Supported by vSphere Container Storage Plug-in

2 vSphere Container Storage Plug-in Deployment
   Preparing for Installation of vSphere Container Storage Plug-in
   vCenter Server and ESXi Version Requirements for vSphere Container Storage Plug-in
   vSphere Roles and Privileges
   Management Network for vSphere Container Storage Plug-in
   Configure Kubernetes Cluster VMs
   Install vSphere Cloud Provider Interface
   Configure CoreDNS for vSAN File Share Volumes
   Deploying the vSphere Container Storage Plug-in on a Native Kubernetes Cluster
   Create vmware-system-csi Namespace for vSphere Container Storage Plug-in
   Taint Kubernetes Control Plane Node for the vSphere Container Storage Plug-in Installation
   Create a Kubernetes Secret for vSphere Container Storage Plug-in
   Use a Secure Connection for vSphere Container Storage Plug-in
   Use a Secure Connection in the Environment with Multiple vCenter Server Instances
   Automatic Generation of Cluster IDs in vSphere Container Storage Plug-in
   Install the vSphere Container Storage Plug-in
   Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances
   Install vSphere Cloud Provider Interface in an Environment with Multiple vCenter Server Instances
   Deploy vSphere Container Storage Plug-in with Multiple vCenter Server Instances
   Deploying vSphere Container Storage Plug-in with Topology
   Deploy vSphere Container Storage Plug-in with Topology
   Upgrading vSphere Container Storage Plug-in
   Remount ReadWriteMany Volumes Backed by vSAN File Service
   Upgrade vSphere Container Storage Plug-in of a Version Earlier than 2.3.0
   Upgrade vSphere Container Storage Plug-in of a Version 2.3.0 or Later
   Enable Volume Snapshot and Restore After an Upgrade to Version 2.5.x or Later
   Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in
   Enable Migration of In-Tree vSphere Volumes to vSphere Container Storage Plug-in
   Deploying vSphere Container Storage Plug-in on Windows
   Enable vSphere Container Storage Plug-in with Windows Nodes

3 Using vSphere Container Storage Plug-in
   Provisioning Block Volumes with vSphere Container Storage Plug-in
   Dynamically Provision a Block Volume with vSphere Container Storage Plug-in
   Node VM Placement and Datastore Selection for Volume Provisioning
   Statically Provision a Block Volume with vSphere Container Storage Plug-in
   Use XFS File System with vSphere Container Storage Plug-in
   Using Raw Block Volumes with vSphere Container Storage Plug-in
   Create a Raw Block PVC
   Use a Raw Block PVC
   Provisioning File Volumes with vSphere Container Storage Plug-in
   Dynamically Provision File Volumes with vSphere Container Storage Plug-in
   Statically Provision File Volumes with vSphere Container Storage Plug-in
   Topology-Aware Volume Provisioning
   Deploy Workloads with Immediate Mode in a Topology-Aware Environment
   Deploy Workloads with Immediate Mode in a Topology-Aware Environment for File Volumes
   Deploy Workloads with Immediate Mode in a Topology-Aware Environment for Block Volumes
   Deploy Workloads with WaitForFirstConsumer Mode in a Topology-Aware Environment
   Deploy Workloads with WaitForFirstConsumer Mode in a Topology-Aware Environment for File Volumes
   Deploy Workloads with WaitForFirstConsumer Mode in a Topology-Aware Environment for Block Volumes
   Deploy Workloads on a Preferential Datastore in a Topology-Aware Environment
   Provisioning Volumes in an Environment with Multiple vCenter Server Instances
   Expanding a Volume with vSphere Container Storage Plug-in
   Expand a Volume in Online Mode
   Expand a Volume in Offline Mode
   Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster
   Deployment 1
   Deployment 2
   Deployment 3
   Upgrade Kubernetes and Persistent Volumes on vSAN Stretched Clusters
   Using vSphere Container Storage Plug-in for HCI Mesh Deployment
   Volume Snapshot and Restore
   Collecting Metrics with Prometheus to Monitor vSphere Container Storage Plug-in
   Deploy Prometheus and Build Grafana Dashboards
   Deploy Prometheus Monitoring Stack
   Launch Prometheus UI
   Create Grafana Dashboard
   Set Up a Prometheus Alert
   Collect vSphere Container Storage Plug-in Logs
Getting Started with VMware vSphere Container Storage Plug-in

The Getting Started with VMware vSphere Container Storage Plug-in documentation provides information about setting up and using VMware vSphere® Container Storage™ Plug-in. vSphere Container Storage Plug-in, also called the upstream vSphere CSI driver, is a volume plug-in that runs in a native Kubernetes cluster deployed in vSphere and is responsible for provisioning persistent volumes on vSphere storage.

At VMware, we value inclusion. To foster this principle within our customer, partner, and internal
community, we create content using inclusive language.

Intended Audience
This information is intended for developers and vSphere administrators who have a basic
understanding of Kubernetes and are familiar with container deployment concepts.

Updated Information

The Getting Started with VMware vSphere Container Storage Plug-in documentation is updated with each release of the product or when necessary.

This table provides the update history of the Getting Started with VMware vSphere Container
Storage Plug-in.

Revision       Description

11 DEC 2023    Added a statement about limitations of RWX volumes backed by vSAN File Service. See Provisioning File Volumes with vSphere Container Storage Plug-in.

08 DEC 2023    Updated information about the vSphere version supported for HCI Mesh deployment in vSphere Functionality Supported by vSphere Container Storage Plug-in.

21 NOV 2023    Updated information about Storage vMotion in vSphere Functionality Supported by vSphere Container Storage Plug-in.

19 SEP 2023    Added information about vSphere versions recommended in Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in.

15 SEP 2023    - Updated information about setting up a vSAN stretched cluster in Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster.
               - Added a new section about HCI Mesh deployment in Using vSphere Container Storage Plug-in for HCI Mesh Deployment.

13 JUL 2023    - Updated the prerequisites in Deploy vSphere Container Storage Plug-in with Topology.
               - Added information about patch releases to Kubernetes Versions Compatible with vSphere Container Storage Plug-in.

10 JUL 2023    Minor revisions.

20 JUN 2023    Corrected the name of the VM storage policies privilege. See vSphere Roles and Privileges.

30 MAY 2023    Minor revisions.

12 MAY 2023    Updated the prerequisites in Use a Secure Connection for vSphere Container Storage Plug-in and Use a Secure Connection in the Environment with Multiple vCenter Server Instances.

27 APR 2023    Updated the PVC and PV example in Statically Provision a Block Volume with vSphere Container Storage Plug-in.

24 APR 2023    Updated the topology example in Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances.

18 APR 2023    Updated support statement for thick provisioning on VMFS datastores. See vSphere Functionality Supported by vSphere Container Storage Plug-in.

13 APR 2023    Changed maximum supported Kubernetes version from 1.26 to 1.27. See Compatibility Matrices for vSphere Container Storage Plug-in.

17 MAR 2023    Initial release.

1 vSphere Container Storage Plug-in Concepts
Cloud Native Storage (CNS) integrates vSphere and Kubernetes and offers capabilities to create and manage container volumes in the vSphere environment. CNS consists of two components: the CNS component in vCenter Server and a vSphere volume driver in Kubernetes, called the vSphere Container Storage Plug-in.

The main goal of CNS is to enable vSphere and vSphere storage, including vSAN, as a platform to run stateful Kubernetes workloads. vSphere offers a highly reliable and performant data path that is mature for enterprise use. CNS exposes this data path to Kubernetes and brings an understanding of Kubernetes volume and pod abstractions to vSphere.

Cloud Native Storage Architecture

The following illustration demonstrates how the CNS components, CNS in vCenter Server and vSphere Container Storage Plug-in, interact with other components in the vSphere environment.

[Figure: Cloud Native Storage architecture. DevOps users interact with the vSphere Container Storage Plug-in running in the Kubernetes cluster, while the VI admin uses the vSphere Client. The plug-in communicates with the CNS component, and its cache DB, in vCenter Server. CNS relies on FCD, SPBM, and vSAN FS to place block and file volumes on vSAN, VMFS, NFS, and vVols datastores across the ESXi hosts.]

CNS Component in vCenter Server


In the vSphere environment, the CNS control plane introduces a concept of volumes, such as
container volumes and persistent volumes. CNS in vCenter Server is a storage control plane
for container volumes. CNS is responsible for managing the life cycle of volumes, including
operations such as create, read, update, and delete. It is also responsible for managing volume
metadata, snapshot and restore, volume copy and clone, as well as monitoring the health and
compliance of volumes. These volumes are independent of the virtual machine life cycle and have
their own identity in vSphere.

CNS leverages the existing Storage Policy Based Management (SPBM) functionality for volume
provisioning. The DevOps users can use the storage policies, created by the vSphere
administrator in vSphere, to specify the storage SLAs for the application volumes within
Kubernetes. CNS enables the DevOps users to self-provision storage for their apps with
appropriate storage SLAs. CNS honors these storage SLAs by provisioning the volume on an
SPBM policy-compliant datastore in vSphere. The SPBM policy is applied at the granularity of a
container volume.

CNS supports block volumes backed by First Class Disk (FCD) and file volumes backed by vSAN
file shares. A block volume can only be attached to one Kubernetes pod with ReadWriteOnce
access mode at any point in time. A file volume can be attached to one or more pods with
ReadWriteMany/ReadOnlyMany access modes.
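
For example, the access mode requested in a PersistentVolumeClaim determines which type of volume CNS provisions. The following minimal sketch assumes StorageClasses named example-block-sc and example-file-sc that are backed by vSphere Container Storage Plug-in; the names are illustrative only.

# Block volume: ReadWriteOnce access, backed by a First Class Disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce            # attached to one pod at a time
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-block-sc
---
# File volume: ReadWriteMany access, backed by a vSAN file share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-pvc
spec:
  accessModes:
    - ReadWriteMany            # shared by multiple pods
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-file-sc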

vSphere Container Storage Plug-in in Kubernetes


In Kubernetes, CNS provides a volume driver that consists of two subcomponents.

n CSI Plug-in: The CSI plug-in is responsible for volume provisioning, attaching and detaching
the volume to VMs, mounting, formatting, and unmounting volumes from the pod within the
node VM, and so on. It is built as an out-of-tree CSI plug-in for Kubernetes.

n Syncer: The syncer is responsible for pushing the PV, PVC, and pod metadata to CNS. It also
offers a CNS operator that is used in vSphere with Tanzu. For information, see the vSphere
with Tanzu documentation.

Read the following topics next:

- Components of the vSphere Container Storage Plug-in

- Compatibility Matrices for vSphere Container Storage Plug-in

- Supported Kubernetes Functionality

- Configuration Maximums for vSphere Container Storage Plug-in

- vSphere Functionality Supported by vSphere Container Storage Plug-in

Components of the vSphere Container Storage Plug-in


The vSphere Container Storage Plug-in contains different components.

vSphere Container Storage Plug-in Controller


The vSphere Container Storage Plug-in controller provides an interface used by the container orchestrators to manage the life cycle of vSphere volumes. It also allows you to create, expand, and delete volumes, and to attach and detach volumes to node VMs.

vSphere Container Storage Plug-in Node

The vSphere Container Storage Plug-in node allows you to format and mount the volumes to
nodes, and use bind mounts for the volumes inside the pod. Before the volume is detached,
the vSphere Container Storage Plug-in node helps to unmount the volume from the node.
The vSphere Container Storage Plug-in node runs as a daemonset inside the cluster.

Syncer

The Metadata Syncer is responsible for pushing PV, PVC, and pod metadata to CNS. The data appears under the CNS dashboard in the vSphere Client and helps vSphere administrators determine which Kubernetes clusters, apps, pods, PVCs, and PVs are using the volume.

Full synchronization is responsible for keeping CNS up to date with Kubernetes volume metadata information such as PVs, PVCs, pods, and so on. Full synchronization is helpful in the following cases:

- CNS goes down.

- The vSphere Container Storage Plug-in pod goes down.

- The API server or Kubernetes core services go down.

- vCenter Server is restored to a backup point.

- etcd is restored to a backup point.
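
After the plug-in is deployed, you can confirm that the controller and node components described above are running. This quick check is a sketch that assumes a default installation in the vmware-system-csi namespace; object names can vary by version.

kubectl get deployment,daemonset,pods --namespace=vmware-system-csi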

Compatibility Matrices for vSphere Container Storage Plug-in
Each version of vSphere Container Storage Plug-in must be compatible with an appropriate vSphere version. In addition, vSphere Container Storage Plug-in has minimum and maximum Kubernetes version requirements.

vSphere Versions Compatible with vSphere Container Storage Plug-in
As a general rule, when you upgrade your vSphere environment, upgrade vSphere Container
Storage Plug-in to a minimum recommended version.

In addition, availability of specific Kubernetes functionality that vSphere Container Storage Plug-
in supports might require a combination of specific vSphere and vSphere Container Storage
Plug-in versions. Make sure that you follow these requirements. See Supported Kubernetes
Functionality.


vSphere version    Minimum recommended version of vSphere Container Storage Plug-in

8.0 Update 2       3.0.0
8.0 Update 1       2.7.1
8.0                2.7
7.0 Update 3       2.4
7.0 Update 2       2.2
7.0 P05            2.6
7.0 P04            2.5
7.0 P03            2.3

When you use vSphere Container Storage Plug-in with vSphere, the following considerations apply:

- Make sure that your vCenter Server and ESXi versions match. If you have a newer vCenter Server version, but older ESXi hosts, new features added in the latest vCenter Server do not work until you upgrade all ESXi hosts to the newer version.

- For bug fixes and performance improvements, you can deploy the latest patch version of vSphere Container Storage Plug-in without upgrading vSphere. The driver is backward compatible with older vSphere releases.

Kubernetes Versions Compatible with vSphere Container Storage Plug-in
VMware supports vSphere Container Storage Plug-in versions until they reach their End Of Life
(EOL) dates.

Note To take advantage of critical bug fixes, make sure to upgrade to the latest patch release
available for each minor version of vSphere Container Storage Plug-in. For more information
on specific bug fixes, see Release Notes on the VMware vSphere Container Storage Plug-in
Documentation page.

Plug-in Release    Minimum Kubernetes Release    Maximum Kubernetes Release    EOL Date

3.2.0 1.27 1.29 March 2026

3.1.2 1.26 1.28 September 2025

3.1.1 1.26 1.28 September 2025

3.1.0 1.26 1.28 September 2025

3.0.3 1.24 1.27 March 2025

3.0.2 1.24 1.27 March 2025

3.0.1 1.24 1.27 March 2025


3.0.0 1.24 1.27 March 2025

2.7.3 1.23 1.25 October 2024

2.7.2 1.23 1.25 October 2024

2.7.1 1.23 1.25 October 2024

2.7.0 1.23 1.25 October 2024

2.6.4 1.22 1.24 July 2024

2.6.3 1.22 1.24 July 2024

2.6.2 1.22 1.24 July 2024

2.6.1 1.22 1.24 July 2024

2.6.0 1.22 1.24 July 2024

2.5.4 1.21 1.23 February 2024

2.5.3 1.21 1.23 February 2024

2.5.2 1.21 1.23 February 2024

2.5.1 1.21 1.23 February 2024

2.5.0 1.21 1.23 February 2024

2.4.3 1.20 1.22 November 2023

2.4.2 1.20 1.22 November 2023

2.4.1 1.20 1.22 November 2023

2.4.0 1.20 1.22 November 2023

2.3.2 1.19 1.21 August 2023

2.3.1 1.19 1.21 August 2023

2.3.0 1.19 1.21 August 2023

2.2.4 1.18 1.20 April 2023

2.2.3 1.18 1.20 April 2023

2.2.2 1.18 1.20 April 2023

2.2.1 1.18 1.20 April 2023

2.2.0 1.18 1.20 April 2023

2.1.2 1.17 1.19 August 2022


2.1.1 1.17 1.19 August 2022

2.1.0 1.17 1.19 August 2022

2.0.2 1.17 1.19 January 2022

2.0.1 1.17 1.19 January 2022

2.0.0 1.16 1.18 January 2022

Supported Kubernetes Functionality


The level of support that vSphere Container Storage Plug-in provides for Kubernetes features depends on the vSphere version.

VMware fully supports features listed as GA.

In addition, VMware provides support to GA (Kubernetes Beta) features that have been declared
as GA with vSphere Container Storage Plug-in, but are still at a Beta stage with Kubernetes. Note
that because feature details might change after they transition to the GA status with Kubernetes,
you might need to perform additional configuration steps during the vSphere Container Storage
Plug-in upgrade. For information about Kubernetes feature stages, see https://kubernetes.io/
docs/reference/command-line-tools-reference/feature-gates/#using-a-feature.

Some features might be supported only at Alpha or Beta level. Alpha and Beta features do not
receive sufficient testing and are not recommended for production use. VMware Support team
does not support issues reported for these features. Upgrades from Alpha to Beta and from
Beta to GA are not supported, because each subsequent release might introduce incompatible
changes.

Alpha and Beta features are not documented in the Getting Started with VMware vSphere
Container Storage Plug-in documentation. For information about Alpha and Beta features, see
https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/master/docs/book/features.


Feature | Support Status | Minimum Required Plug-in Version | vSphere 7.0 and Later | vSphere 6.7 Update 3
Deploy Workloads on a Preferential Datastore in a Topology-Aware Environment | GA | 2.6.1 | Yes | Yes
Topology-Aware Volume Provisioning (ReadWriteOnce access) | GA | 2.4.1 | Yes | Yes. The following requirements must be met: minimum vSphere version is 6.7 P06*; minimum vSphere Container Storage Plug-in version is 2.4.1.
Topology-Aware Volume Provisioning (ReadWriteMany access) | GA | 3.2.0 | Yes. Minimum vSphere version is 7.0 Update 3. | No
WaitForFirstConsumer binding mode | GA | 2.4.0 | Yes | Yes
vsphere-csi-controller multi replica feature | GA | 2.4.0 | Yes | Yes
Thick volume provisioning | GA | 3.0 | Yes. Minimum vSphere version is 8.0 Update 1. | No
Enable vSphere Container Storage Plug-in with Windows Nodes | GA | 3.0 | Yes | No
Enhanced object health in vSphere Client for vSAN datastores | GA | 2.0.0 | Yes | Yes
Dynamic block PV support (ReadWriteOnce access) | GA | 2.0.0 | Yes | Yes
Dynamic Virtual Volume (vVols) PV support | GA | 2.0.0 | Yes | Yes
Static PV provisioning | GA | 2.0.0 | Yes | Yes
Kubernetes multi-node control plane support | GA | 2.0.0 | Yes | Yes
Offline volume expansion (block volume only) | GA (Kubernetes Beta; minimum required Kubernetes version is 1.16) | 2.0.0 with minimum vSphere 7.0; 2.4.1 with minimum vSphere 6.7 Update 3 P06 | Yes. Minimum vSphere Container Storage Plug-in version is 2.0.0. | Yes. The following requirements must be met: minimum vSphere version is 6.7 Update 3 P06*; minimum vSphere Container Storage Plug-in version is 2.4.1.
Encryption support via VMcrypt (block volume only) | GA | 2.0.0 | Yes | No
Dynamic file PV support (ReadWriteMany access mode) through vSAN File Services on vSAN datastores | GA | 2.0.0 | Yes | No
In-tree vSphere volume migration to CSI | GA (Kubernetes Beta) | 2.2.4 | Yes. Minimum required vSphere version is 7.0 Update 2. | Yes. For versions 2.3.2 and later, minimum required vSphere version is 6.7 Update 3 P06*.
Online volume expansion support (block volume only) | GA (Kubernetes Beta; minimum required Kubernetes version is 1.16) | 2.2.0 | Yes. Minimum required vSphere version is 7.0 Update 2. | No
Use XFS File System with vSphere Container Storage Plug-in | GA | 3.0 | Yes | Yes
Using Raw Block Volumes with vSphere Container Storage Plug-in | GA | 3.0 | Yes | Yes
Volume snapshot support (ReadWriteOnce access) | GA | 2.5.0 | Yes. Minimum required vSphere version is 7.0 Update 3. | No
Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster (ReadWriteOnce access) | GA | 2.4.0 | Yes. Minimum required vSphere version is 7.0 Update 3d. | No
Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster (ReadWriteMany access) | GA | 2.7.0 | Yes. Minimum required vSphere version is 7.0 Update 3. | No
Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances | GA | 3.0.0 | Yes | Yes

* For information about the vSphere 6.7 Update 3 P06 release, see:

- VMware ESXi 6.7, Patch Release ESXi670-202111001

- VMware vCenter Server 6.7 Update 3p Release Notes

Upgrading vSphere Container Storage Plug-in

- You can upgrade vSphere Container Storage Plug-in from any lower version to a higher version.

- vSphere Container Storage Plug-in is backward and forward compatible with vSphere releases.

- Features added in the latest vSphere releases do not work with older versions of vSphere Container Storage Plug-in.

- For more information about upgrading vSphere Container Storage Plug-in, see Upgrading vSphere Container Storage Plug-in.

Configuration Maximums for vSphere Container Storage Plug-in
This topic provides the recommended configuration limits for vSphere Container Storage Plug-in.
When you use vSphere Container Storage Plug-in in your environment, stay within the supported
and recommended limits.


Limits | Single Access Volume | Multi Access Volume
Number of volumes | 10,000 volumes per vCenter Server for vSAN, NFS 3, and VMFS datastores; 840 volumes per vCenter Server for Virtual Volumes datastores | 100 file shares per vSAN cluster; 100 concurrent clients for RWM PVs
Number of RWO PVs, backed by virtual disks, per VM with four controllers | Maximum 59, with four Paravirtual SCSI controllers on the VM and one slot used for the primary disk of the node VM | N/A
Multiple instances of vSphere Container Storage Plug-in pods in a multi-node control plane environment | replica = 3 | replica = 3

Note
- For higher availability, run the vSphere CSI Controller with a minimum of three replicas. If your Kubernetes cluster has more than three control plane nodes, set the CSI Controller replicas to match the number of control plane nodes on the cluster, as shown in the example after this note.

- If your development or test Kubernetes cluster does not contain multiple control plane nodes, set the replica count to one.

- Limits for Single Access Volume are applicable to both single access file system volumes and single access block volumes.

- vSphere Container Storage Plug-in uses only Paravirtual SCSI controllers to attach volumes to node VMs. Each non-Paravirtual SCSI controller on the node VM reduces the maximum limit of RWO PVs per node VM by 15.
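
For example, to match a cluster with three control plane nodes, you can scale the controller as follows. This sketch assumes the default deployment name vsphere-csi-controller in the vmware-system-csi namespace.

kubectl --namespace=vmware-system-csi scale deployment vsphere-csi-controller --replicas=3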

vSphere Functionality Supported by vSphere Container Storage Plug-in
vSphere Container Storage Plug-in supports multiple vSphere features. However, certain
limitations apply.


Functionality | vSphere Container Storage Plug-in Support
vSAN HCI Mesh | Yes. vSphere Container Storage Plug-in supports block volumes when you use HCI Mesh on vSphere 7.0 Update 3 or later. vSphere Container Storage Plug-in does not support file volumes when you use HCI Mesh.
vSphere Storage DRS | No
vSAN File Service on Stretched Cluster | Yes
vCenter Server High Availability | Yes, with vSphere 7.0 Update 3c and later
ESXi Cluster Migration Between Different vCenter Server Systems | No
vMotion | Yes
Storage vMotion | Yes. Supported directly with vSphere Client and CNS manager from vSphere 7.0 Update 3 and vSphere 8.0 Update 2 onwards. For more information, see the CNS Manager page on GitHub.
Cross vCenter Server Migration (moving workloads across vCenter Server systems and ESXi hosts) | No
vSAN, including vSAN Express Storage Architecture (ESA), Virtual Volumes, NFS 3, and VMFS Datastores | Yes
NFS 4 Datastore | Yes, with vSphere 7.0 Update 2 and later
vSphere Cluster Services (vCLS) | Yes, with vSphere 7.0 Update 3c and later
VM Encryption | Yes
Thick Provisioning on Non-vSAN Datastores | Yes. On VMFS datastores with vSphere 8.0 Update 1. For Virtual Volumes, it depends on capabilities exposed by third-party storage arrays.
Thick Provisioning on vSAN Datastores | Yes
Deployments with Multiple vCenter Server Instances (limited support) | Yes. vSphere Container Storage Plug-in does not support datastores shared across vCenter Server instances.

2 vSphere Container Storage Plug-in Deployment
You can install the vSphere Container Storage Plug-in on a generic, also called vanilla, Kubernetes cluster. Before installing the vSphere Container Storage Plug-in, you must meet specific prerequisites. You can later upgrade the plug-in to a new release.

Installation procedures in this section apply only to generic Kubernetes clusters. Supervisor
clusters and Tanzu Kubernetes clusters in vSphere with Tanzu use the preinstalled vSphere
Container Storage Plug-in.

Read the following topics next:

- Preparing for Installation of vSphere Container Storage Plug-in

- Deploying the vSphere Container Storage Plug-in on a Native Kubernetes Cluster

- Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances

- Deploying vSphere Container Storage Plug-in with Topology

- Upgrading vSphere Container Storage Plug-in

- Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in

- Deploying vSphere Container Storage Plug-in on Windows

Preparing for Installation of vSphere Container Storage Plug-in
Before you deploy vSphere Container Storage Plug-in, review the prerequisites to ensure that
you have set up everything you need for the installation.

vCenter Server and ESXi Version Requirements for vSphere Container Storage Plug-in
Make sure that you use correct vCenter Server and ESXi versions with vSphere Container
Storage Plug-in.

For information, see vSphere Versions Compatible with vSphere Container Storage Plug-in.


vSphere Roles and Privileges


vSphere users for vSphere Container Storage Plug-in require a set of privileges to perform Cloud Native Storage operations.

To learn how to create and assign a role, see the vSphere Security documentation.

You must create the following roles with sets of privileges:

Role: CNS-DATASTORE
  Privilege: Datastore > Low level file operations
  Description: Allows performing read, write, delete, and rename operations in the datastore browser.
  Required on: Shared datastores where persistent volumes reside.

Role: CNS-HOST-CONFIG-STORAGE
  Privilege: Host > Configuration > Storage partition configuration
  Description: Allows vSAN datastore management.
  Required on: A vSAN cluster with vSAN file service. Required for file volumes only.

Role: CNS-VM
  Privileges: Virtual machine > Change Configuration > Add existing disk, and Virtual machine > Change Configuration > Add or remove device
  Description: Allows adding an existing virtual disk to a virtual machine, and allows addition or removal of any non-disk device.
  Required on: All node VMs.

Role: CNS-SEARCH-AND-SPBM
  Privileges: CNS > Searchable, and VM storage policies > View VM storage policies
  Description: Allows the storage administrator to see the Cloud Native Storage UI, and allows viewing of defined storage policies.
  Required on: Root vCenter Server.

Role: Read-only (default role)
  Description: Users with the Read Only role for an object are allowed to view the state of the object and details about the object. For example, users with this role can find the shared datastore accessible to all node VMs.
  Required on: All hosts where the node VMs reside, and the data center. For topology-aware environments, all ancestors of node VMs, such as a host, cluster, folder, and data center, must have the Read-only role set for the vSphere user configured to use vSphere Container Storage Plug-in. This is required to allow reading tags and categories to prepare the nodes' topology.

You need to assign roles to the vSphere objects participating in the Cloud Native Storage environment. Make sure to apply roles when a new entity, such as a node VM or a datastore, is added to the vCenter Server inventory for the Kubernetes cluster.

The following sample vSphere inventory provides more information about roles assignment in
vSphere objects.

sc2-rdops-vm06-dhcp-215-129.eng.vmware.com (vCenter Server)
|
|- datacenter (Data Center)
   |
   |- vSAN-cluster (cluster)
      |
      |- 10.192.209.1 (ESXi Host)
      |  |
      |  |- k8s-control-plane (node-vm)
      |
      |- 10.192.211.250 (ESXi Host)
      |  |
      |  |- k8s-node1 (node-vm)
      |
      |- 10.192.217.166 (ESXi Host)
      |  |
      |  |- k8s-node2 (node-vm)
      |
      |- 10.192.218.26 (ESXi Host)
      |  |
      |  |- k8s-node3 (node-vm)

As an example, assume that each host has the following shared datastores along with some local VMFS datastores:

- shared-vmfs

- shared-nfs

- vsanDatastore

Role                       Usage
ReadOnly                   The data center and all ESXi hosts where the node VMs reside
CNS-HOST-CONFIG-STORAGE    vSAN-cluster, when vSAN file service is used for file volumes
CNS-DATASTORE              The shared datastores where persistent volumes reside: shared-vmfs, shared-nfs, and vsanDatastore
CNS-VM                     The node VMs: k8s-control-plane, k8s-node1, k8s-node2, and k8s-node3
CNS-SEARCH-AND-SPBM        The root vCenter Server
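
As a sketch of how these assignments can be scripted, the following govc commands create the roles and grant one of them on a node VM from the sample inventory. The privilege IDs, principal name, and inventory paths are assumptions derived from the privilege names in the table above; verify them against your environment before use.

# Create the roles (privilege IDs are assumptions; compare with the table above).
govc role.create CNS-DATASTORE Datastore.FileManagement
govc role.create CNS-HOST-CONFIG-STORAGE Host.Config.Storage
govc role.create CNS-VM VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice
govc role.create CNS-SEARCH-AND-SPBM Cns.Searchable StorageProfile.View

# Grant a role on an object for the vSphere user that the plug-in connects as.
govc permissions.set -principal k8s-user@vsphere.local -role CNS-VM /datacenter/vm/k8s-node1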

Management Network for vSphere Container Storage Plug-in


By default, the vSphere Cloud Provider Interface and vSphere Container Storage Plug-in pods are scheduled on Kubernetes control plane nodes. Kubernetes control plane nodes must have access to the management network to communicate with vCenter Server. Worker nodes in the Kubernetes cluster do not require access to the management network to access vCenter Server.

For more information about providing Kubernetes nodes with access to vCenter Server credentials, see Deploy vSphere Container Storage Plug-in with Topology.

Configure Kubernetes Cluster VMs


On each node VM that participates in the Kubernetes cluster with vSphere Container Storage
Plug-in, you must enable the disk.EnableUUID parameter and perform other configuration
steps.

Configure all VMs that form the Kubernetes cluster with vSphere Container Storage Plug-in. You
can configure the VMs using the vSphere Client or the govc command-line tool.

Prerequisites

- Create several VMs for your Kubernetes cluster.

- On each node VM, install VMware Tools. For more information about installation, see Installing and upgrading VMware Tools in vSphere.

- Required privilege: Virtual machine > Configuration > Settings.


Procedure

1 Enable the disk.EnableUUID parameter using the vSphere Client.

a In the vSphere Client, right-click the VM and select Edit Settings.

b Click the VM Options tab and expand the Advanced menu.

c Click Edit Configuration next to Configuration Parameters.

d Configure the disk.EnableUUID parameter.

If the parameter exists, make sure that its value is set to True. If the parameter is not present, add it and set its value to True.

Name               Value
disk.EnableUUID    True

2 Upgrade the VM hardware version to 15 or higher.

a In the vSphere Client, navigate to the virtual machine.

b Select Actions > Compatibility > Upgrade VM Compatibility.

c Click Yes to confirm the upgrade.

d Select a compatibility and click OK.

3 Add VMware Paravirtual SCSI storage controller to the VM.

a In the vSphere Client, right-click the VM and select Edit Settings.

b On the Virtual Hardware tab, click the Add New Device button.

c Select SCSI Controller from the drop-down menu.

d Expand New SCSI controller and from the Change Type menu, select VMware
Paravirtual.

e Click OK.

Example

As an alternative, you can configure the VMs using the govc command-line tool.

1 Install govc on your devbox/workstation.

2 Obtain VM paths.

$ export GOVC_INSECURE=1
$ export GOVC_URL='https://<VC_Admin_User>:<VC_Admin_Passwd>@<VC_IP>'

$ govc ls
/<datacenter-name>/vm
/<datacenter-name>/network
/<datacenter-name>/host
/<datacenter-name>/datastore


// To retrieve all Node VMs


$ govc ls /<datacenter-name>/vm
/<datacenter-name>/vm/<vm-name1>
/<datacenter-name>/vm/<vm-name2>
/<datacenter-name>/vm/<vm-name3>
/<datacenter-name>/vm/<vm-name4>
/<datacenter-name>/vm/<vm-name5>

3 To enable disk.EnableUUID, run the following command:

govc vm.change -vm '/<datacenter-name>/vm/<vm-name1>' -e="disk.enableUUID=1"

4 To upgrade the VM hardware version to 15 or higher, run the following command:

govc vm.upgrade -version=15 -vm '/<datacenter-name>/vm/<vm-name1>'
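
5 Optionally, add the VMware Paravirtual SCSI controller from the command line as well. This step is a sketch; the command and its flags are assumptions to confirm with govc device.scsi.add -h before use.

govc device.scsi.add -vm '/<datacenter-name>/vm/<vm-name1>' -type pvscsi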

Install vSphere Cloud Provider Interface


vSphere Container Storage Plug-in requires that you install a Cloud Provider Interface on your
Kubernetes cluster in the vSphere environment. Follow this procedure to install the vSphere
Cloud Provider Interface (CPI).

Prerequisites

Ensure that you have the following permissions before you install a Cloud Provider Interface on your Kubernetes cluster in the vSphere environment:

- Read permission on the parent entities of the node VMs, such as folder, host, datacenter, datastore folder, and datastore cluster.

If your environment includes multiple vCenter Server instances, see Install vSphere Cloud Provider Interface in an Environment with Multiple vCenter Server Instances.

Procedure

1 Before you install CPI, verify that all nodes, including the control plane nodes, are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule.

To taint nodes, use the kubectl taint node <node-name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule command. When the kubelet is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.

2 Identify the Kubernetes major version. For example, if the major version is 1.22.x, then run the following:

VERSION=1.22
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-$VERSION/releases/v$VERSION/vsphere-cloud-controller-manager.yaml


3 Create a vsphere-cloud-config configmap of the vSphere configuration.

Note This is used for CPI. There is a separate secret required for vSphere Container Storage Plug-in.

Modify the vsphere-cloud-controller-manager.yaml file downloaded in step 2 and update the vCenter Server information. For example, see an excerpt of the vsphere-cloud-controller-manager.yaml file with dummy values below.

apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-secret
  labels:
    vsphere-cpi-infra: secret
    component: cloud-controller-manager
  namespace: kube-system
# NOTE: this is just an example configuration, update with real values based on your environment
stringData:
  10.185.0.89.username: "Administrator@vsphere.local"
  10.185.0.89.password: "Admin!23"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vsphere-cloud-config
  labels:
    vsphere-cpi-infra: config
    component: cloud-controller-manager
  namespace: kube-system
data:
  # NOTE: this is just an example configuration, update with real values based on your environment
  vsphere.conf: |
    # Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
    global:
      port: 443
      # set insecureFlag to true if the vCenter uses a self-signed cert
      insecureFlag: true
      # settings for using k8s secret
      secretName: vsphere-cloud-secret
      secretNamespace: kube-system

    # vcenter section
    vcenter:
      my-vc-name:
        server: 10.185.0.89
        user: Administrator@vsphere.local
        password: Admin!23
        datacenters:
          - VSAN-DC


4 Apply the release manifest with updated values for the config map.

This action creates the Roles, Role Bindings, Service Account, Service, and the cloud-controller-manager pod.

# kubectl apply -f vsphere-cloud-controller-manager.yaml
serviceaccount/cloud-controller-manager created
secret/vsphere-cloud-secret created
configmap/vsphere-cloud-config created
rolebinding.rbac.authorization.k8s.io/servicecatalog.k8s.io:apiserver-authentication-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:cloud-controller-manager created
clusterrole.rbac.authorization.k8s.io/system:cloud-controller-manager created
daemonset.apps/vsphere-cloud-controller-manager created

5 Remove the vsphere.conf file created at /etc/kubernetes/, if one exists, and delete the downloaded manifest, which contains vCenter Server credentials in plain text:

rm vsphere-cloud-controller-manager.yaml

Note You can use the external custom cloud provider CPI with vSphere Container Storage Plug-in.
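
After the cloud-controller-manager initializes the nodes, the uninitialized taint is removed and each node receives a provider ID. As an optional sanity check, not part of the official procedure, you can run:

kubectl describe nodes | egrep "Taints:|Name:"
kubectl describe nodes | grep "ProviderID"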

Configure CoreDNS for vSAN File Share Volumes


vSphere Container Storage Plug-in requires DNS forwarding configuration in CoreDNS ConfigMap
to help resolve vSAN file share host name.

Procedure

Modify the CoreDNS ConfigMap and add the conditional forwarder configuration.

kubectl -n kube-system edit configmap coredns

The output of this step is shown below.

.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30


loop
reload
loadbalance
}
vsanfs-sh.prv:53 {
errors
cache 30
forward . 10.161.191.241
}

In this configuration:

- vsanfs-sh.prv is the DNS suffix for vSAN file service.

- 10.161.191.241 is the DNS server that resolves the file share host name.

You can obtain the DNS suffix and DNS IP address from vCenter Server using the following menu options:

vSphere Cluster > Configure > vSAN > Services > File Service

Deploying the vSphere Container Storage Plug-in on a Native Kubernetes Cluster
You can follow procedures in this section to install the vSphere Container Storage Plug-in on
a Kubernetes cluster. The installation procedures apply only to generic, also called vanilla,
Kubernetes clusters. Supervisor clusters and Tanzu Kubernetes clusters in vSphere with Tanzu
use the pre-installed vSphere Container Storage Plug-in.

Before installing the vSphere Container Storage Plug-in, ensure that your environment meets all
required prerequisites. For more information, see Preparing for Installation of vSphere Container
Storage Plug-in.

Perform all installation procedures on the same Kubernetes node where you deploy the vSphere
Container Storage Plug-in. VMware recommends that you install the vSphere Container Storage
Plug-in on the Kubernetes control plane node.


Procedure

1 Create vmware-system-csi Namespace for vSphere Container Storage Plug-in


Before installing the vSphere Container Storage Plug-in in your generic Kubernetes
environment, create the vmware-system-csi namespace.

2 Taint Kubernetes Control Plane Node for the vSphere Container Storage Plug-in Installation

Before installing the vSphere Container Storage Plug-in in your generic Kubernetes environment, make sure that you taint the control plane node with the node-role.kubernetes.io/control-plane=:NoSchedule parameter.


3 Create a Kubernetes Secret for vSphere Container Storage Plug-in


When preparing your native Kubernetes environment for installation of the vSphere
Container Storage Plug-in, create a Kubernetes secret that contains configuration details
to connect to vSphere.

4 Install the vSphere Container Storage Plug-in


Install an appropriate version of the vSphere Container Storage Plug-in in your native
Kubernetes environment. After you install the plug-in, you can verify whether the installation
is successful.

Create vmware-system-csi Namespace for vSphere Container Storage Plug-in
Before installing the vSphere Container Storage Plug-in in your generic Kubernetes environment,
create the vmware-system-csi namespace.

Procedure

To create the vmware-system-csi namespace for vSphere Container Storage Plug-in, run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.0.0/manifests/vanilla/namespace.yaml

Note To be able to take advantage of the latest bug fixes and feature updates, make sure to
use the most recent version of vSphere Container Storage Plug-in. For versions and updates,
see Release Notes on the VMware vSphere Container Storage Plug-in Documentation page.
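
To confirm that the namespace was created, you can run a check similar to the following.

kubectl get namespace vmware-system-csi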

Taint Kubernetes Control Plane Node for the vSphere Container Storage Plug-in Installation

Before installing the vSphere Container Storage Plug-in in your generic Kubernetes environment, make sure that you taint the control plane node with the node-role.kubernetes.io/control-plane=:NoSchedule parameter.

Procedure

1 To taint the control plane node, run the following command:

kubectl taint nodes <k8s-primary-name> node-role.kubernetes.io/control-plane=:NoSchedule

2 Verify that you have tainted the control plane node.

$ kubectl describe nodes | egrep "Taints:|Name:"


Name: <k8s-primary-name>
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Name: <k8s-worker1-name>
Taints: <none>
Name: <k8s-worker2-name>
Taints: <none>
Name: <k8s-worker3-name>
Taints: <none>
Name: <k8s-worker4-name>
Taints: <none>

Create a Kubernetes Secret for vSphere Container Storage Plug-in


When preparing your native Kubernetes environment for installation of the vSphere Container
Storage Plug-in, create a Kubernetes secret that contains configuration details to connect to
vSphere.

Before installing the vSphere Container Storage Plug-in on a native Kubernetes cluster, create a
configuration file that contains details to connect to vSphere. The default file for the configuration
details is the csi-vsphere.conf file. If you prefer to use a file with another name, change the
environment variable VSPHERE_CSI_CONFIG in the deployment YAMLs. For more information, see
Install the vSphere Container Storage Plug-in.

For information about topology-aware deployments, see Deploy vSphere Container Storage
Plug-in with Topology.

For information about deployments with multiple vCenter Server instances, see Deploying
vSphere Container Storage Plug-in with Multiple vCenter Server Instances.

Procedure

1 Create a vSphere configuration file for block volumes or file volumes.

- Block volumes.

vSphere configuration file for block volumes includes the following sample entries.

$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "<cluster-id>"
cluster-distribution = "<cluster-distribution>"
ca-file = <ca file path> # optional, use with insecure-flag set to false
thumbprint = "<cert thumbprint>" # optional, use with insecure-flag set to false
without providing ca-file

[VirtualCenter "<IP or FQDN>"]
insecure-flag = "<true or false>"
user = "<username>"
password = "<password>"
port = "<port>"
datacenters = "<datacenter1-path>, <datacenter2-path>, ..."

The entries have the following meanings.


cluster-id
  The unique cluster identifier. Each Kubernetes cluster must contain a unique cluster-id set in the configuration file. The cluster ID cannot exceed 64 characters. Use only alphanumeric characters, period (.), or hyphen (-).
  This parameter is optional from vSphere Container Storage Plug-in version 3.0 or later. If you do not enter a cluster ID, vSphere Container Storage Plug-in internally generates a unique cluster ID. For more information, see Automatic Generation of Cluster IDs in vSphere Container Storage Plug-in.

cluster-distribution
  The distribution of the Kubernetes cluster. This parameter is optional. Examples are Openshift, Anthos, and TKGI. When you enter values for this parameter, keep in mind the following:
  - The vSphere Container Storage Plug-in controller goes into CrashLoopBackOff state when you enter values with the special character \r.
  - When you enter values exceeding 128 characters, the PVC creation might be stuck in Pending state.
  Note This field will be marked as mandatory in vSphere Container Storage Plug-in version 3.2.0.

VirtualCenter
  The section defines parameters such as the vCenter Server IP address and FQDN.

insecure-flag
  Takes the following values:
  - true indicates that you want to use a self-signed certificate for login.
  - false indicates that you use a secure connection. For additional steps, see Use a Secure Connection for vSphere Container Storage Plug-in. If your environment includes multiple vCenter Server instances, see Use a Secure Connection in the Environment with Multiple vCenter Server Instances.

user
  The vCenter Server username. You must specify the username along with the domain name. For example, user = "userName@domainName" or user = "domainName\\username". If you don't specify the domain name for active directory users, the vSphere Container Storage Plug-in will not function properly.

password
  Password for a vCenter Server user.

port
  vCenter Server port. The default is 443.

ca-file
  The path to a CA certificate in PEM format. This is an optional parameter.

thumbprint
  The certificate thumbprint. This is an optional parameter. It is ignored when you use an unsecured setup or when you provide ca-file.

datacenters
  List of all comma-separated datacenter paths where Kubernetes node VMs are present. Provide the name of the datacenter when it is located at the root. When it is placed in a folder, specify the path as folder/datacenter-name. The datacenter name cannot contain a comma, since the comma is used as a delimiter.

migration-datastore-url
  If you use vSphere Container Storage Plug-in version 3, add this parameter when you migrate in-tree vSphere volumes to vSphere Container Storage Plug-in. The parameter allows the plug-in to honor the default datastore feature of the in-tree vSphere plug-in.

Note To deploy the vSphere Container Storage Plug-in for block volumes in VMware
Cloud environment, you must enter the cloud administrator username and password in
the vSphere configuration file.

- File volumes.

For file volumes, it is optional to add parameters that specify network permissions
and placement of volumes. Otherwise, default values will be used. Use the following
configuration file as an example.

$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "<cluster-id>"
cluster-distribution = "<cluster-distribution>"
ca-file = <ca file path> # optional, use with insecure-flag set to false

[NetPermissions "A"]
ips = "*"
permissions = "READ_WRITE"
rootsquash = false

[NetPermissions "B"]
ips = "10.20.20.0/24"
permissions = "READ_ONLY"
rootsquash = true

[NetPermissions "C"]
ips = "10.30.30.0/24"
permissions = "NO_ACCESS"

[NetPermissions "D"]
ips = "10.30.10.0/24"
rootsquash = true

[NetPermissions "E"]
ips = "10.30.1.0/24"


[VirtualCenter "<IP or FQDN>"]
insecure-flag = "<true or false>"
user = "<username>"
password = "<password>"
port = "<port>"
datacenters = "<datacenter1-path>, <datacenter2-path>, ..."

The entries have the following meanings.

NetPermissions
  This parameter is exclusive to file volumes and is optional. In this sample vSphere configuration file, the set of parameters restricts the network capabilities of all file share volumes that are created. If you do not specify the complete set of NetPermissions for a given IP range, or completely omit the section, the system uses default values. You can define as many NetPermissions sections as you want. Each section can include the following strings:
  - Ips: Defines the IP range or IP subnet to which these restrictions apply. The default value for Ips is *, which means all IPs.
  - Permissions: Defines the permissions level, such as READ_WRITE, READ_ONLY, or NO_ACCESS. The default value for Permissions is READ_WRITE for the specified IP range.
  - RootSquash: Defines the security access level for the file share volume. The default for RootSquash is false, which allows root access to all file share volumes that are created within the specified IP range.

  Note Do not use NO_ACCESS permissions for IPs "*" or the subnets of the node IPs in the Kubernetes cluster. Otherwise, a volume created with these network permissions cannot be used with the pod. The volume mount fails with the error: Internal desc = error publish volume to target path: mount failed: exit status 32...mounting .. failed, reason given by server: No such file or directory.


2 Create a Kubernetes secret for vSphere credentials.

a Create the secret by running the following command:

kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=vmware-system-csi

b Verify that the credential secret is successfully created in the vmware-system-csi namespace.

$ kubectl get secret vsphere-config-secret --namespace=vmware-system-csi


NAME TYPE DATA AGE
vsphere-config-secret Opaque 1 43s

c Delete the configuration file for security purposes.

rm csi-vsphere.conf

Use a Secure Connection for vSphere Container Storage Plug-in


Follow this procedure if you want to use a secure connection instead of using a self-signed
certificate for login.

Prerequisites

Make sure to enter false as a value for the insecure-flag parameter in the vSphere
configuration file. The value indicates that you plan to use a secure connection.

If your environment includes multiple vCenter Server instances, see Use a Secure Connection in
the Environment with Multiple vCenter Server Instances.

Procedure

1 Download trusted root CA certificates from vCenter Server at https://vCenter-IP-Address/certs/download.zip, extract the download.zip file containing certificates, and create a config-map using the certificate in the certs/lin directory.

$ curl -LO https://vCenter-IP-Address/certs/download.zip


$ unzip download.zip
$ tree certs/
certs/
├── lin
│ ├── 6355e8d1.0
│ └── 6355e8d1.r1
├── mac
│ ├── 6355e8d1.0
│ └── 6355e8d1.r1
└── win
    ├── 6355e8d1.0.crt
    └── 6355e8d1.r1.crl

3 directories, 6 files


2 Create config-map for root-ca certificate.

$ cd certs/lin
$ kubectl create configmap vc-root-ca-cert --from-file=6355e8d1.0 --namespace=vmware-
system-csi
configmap/vc-root-ca-cert created

3 Set the following values for vsphere-config-secret in the vmware-system-csi namespace.

a Set insecure-flag to false.

[Global]
.
.
insecure-flag = "false"
ca-file = "/etc/ssl/certs/6355e8d1.0"
.

b Update the vCenter Server details to FQDN as shown in the example below.

[Global]
cluster-id = "cluster1"
cluster-distribution = "CSI-Vanilla"

[VirtualCenter "vCenter-FQDN"]
insecure-flag = "false"
ca-file = "/etc/ssl/certs/6355e8d1.0"
user = "administrator@vsphere.local"
password = "Admin!444"
port = "555"
datacenters = "VSAN-DC"

[Snapshot]
global-max-snapshots-per-block-volume = 3

[Labels]
topology-categories = "k8s-zone"

4 Mount the vc-root-ca-cert configmap as a volume to the CA root location of the vsphere-syncer and vsphere-csi-controller containers in the vsphere-csi-controller pod.

Refer to the following change for the vsphere-csi-controller deployment for the vsphere-csi-controller and vsphere-syncer containers.

.
.
containers:
- name: vsphere-csi-controller
volumeMounts:
- mountPath: /etc/ssl/certs/6355e8d1.0
subPath: 6355e8d1.0
name: vc-root-ca-cert


- name: vsphere-syncer
volumeMounts:
- mountPath: /etc/ssl/certs/6355e8d1.0
subPath: 6355e8d1.0
name: vc-root-ca-cert
.
.
volumes:
- name: vc-root-ca-cert
configMap:
name: vc-root-ca-cert
.
.

5 Apply the above change for the vsphere-csi-controller deployment and wait for the vSphere Container Storage Plug-in controller pods to restart.
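
One way to wait for the restart is to watch the rollout. This sketch assumes the default deployment name.

kubectl --namespace=vmware-system-csi rollout status deployment vsphere-csi-controller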

Use a Secure Connection in the Environment with Multiple vCenter Server Instances
Follow this procedure if you want to use a secure connection in the environment with multiple
vCenter Server instances.

Prerequisites

n You use environment with multiple vCenter Server instances. See Deploying vSphere
Container Storage Plug-in with Multiple vCenter Server Instances.

n You entered false as a value for the insecure-flag parameter in the vSphere configuration
file. The value indicates that you plan to use a secure connection instead of using a self-
signed certificate for login.

Procedure

1 For each vCenter Server that needs a secure connection, download trusted root CA certificates from https://vCenter-IP-Address/certs/download.zip, extract the download.zip file containing certificates, and create a config-map using the certificate in the certs/lin directory.

In the following example, a Kubernetes cluster is spread across two vCenter Server instances. vSphere Container Storage Plug-in needs to establish a secure connection with both instances.

$ curl -LO https://<vCenter-1-IP-Address>/certs/download.zip
$ unzip download.zip
$ tree certs-vc1/
certs-vc1/
├── lin
│   ├── 6355e8d1.0
│   └── 6355e8d1.r1
├── mac
│   ├── 6355e8d1.0
│   └── 6355e8d1.r1
└── win
    ├── 6355e8d1.0.crt
    └── 6355e8d1.r1.crl

3 directories, 6 files

$ curl -LO https://<vCenter-2-IP-Address>/certs/download.zip
$ unzip download.zip
$ tree certs-vc2/
certs-vc2/
├── lin
│   ├── 4135e8d1.0
│   └── 4135e8d1.r1
├── mac
│   ├── 4135e8d1.0
│   └── 4135e8d1.r1
└── win
    ├── 4135e8d1.0.crt
    └── 4135e8d1.r1.crl

3 directories, 6 files

2 For each vCenter Server instance, create config-map for a root-ca certificate.

$ cd certs-vc1/lin
$ kubectl create configmap vc-1-root-ca-cert --from-file=6355e8d1.0 --namespace=vmware-
system-csi
configmap/vc-1-root-ca-cert created

$ cd certs-vc2/lin
$ kubectl create configmap vc-2-root-ca-cert --from-file=4135e8d1.0 --namespace=vmware-
system-csi
configmap/vc-2-root-ca-cert created

3 For each instance of secure vCenter Server, set insecure-flag to false in the vsphere-config-secret in the vmware-system-csi namespace.

[VirtualCenter "<VC-1-IP or VC-1-FQDN>"]


insecure-flag = "false"
ca-file = "/etc/ssl/certs/6355e8d1.0"
...

[VirtualCenter "<VC-2-IP or VC-2-FQDN>"]


insecure-flag = "false"
ca-file = "/etc/ssl/certs/4135e8d1.0"
...

4 Mount each config-map created in Step 2 as a volume to the CA root location of the
vsphere-syncer and vsphere-csi-controller containers in the vsphere-csi-controller pod.

Use the following as an example of the changes for the vsphere-csi-controller deployment,
applied to the vsphere-syncer and vsphere-csi-controller containers.

...
containers:
  - name: vsphere-csi-controller
    volumeMounts:
      - mountPath: /etc/ssl/certs/6355e8d1.0
        subPath: 6355e8d1.0
        name: vc-1-root-ca-cert
      - mountPath: /etc/ssl/certs/4135e8d1.0
        subPath: 4135e8d1.0
        name: vc-2-root-ca-cert
  - name: vsphere-syncer
    volumeMounts:
      - mountPath: /etc/ssl/certs/6355e8d1.0
        subPath: 6355e8d1.0
        name: vc-1-root-ca-cert
      - mountPath: /etc/ssl/certs/4135e8d1.0
        subPath: 4135e8d1.0
        name: vc-2-root-ca-cert
...
volumes:
  - name: vc-1-root-ca-cert
    configMap:
      name: vc-1-root-ca-cert
  - name: vc-2-root-ca-cert
    configMap:
      name: vc-2-root-ca-cert
...

5 Apply the changes you made in Step 4 to the vsphere-csi-controller deployment and wait
for the vSphere Container Storage Plug-in controller pods to restart.

Automatic Generation of Cluster IDs in vSphere Container Storage Plug-in


Every Kubernetes cluster in vSphere Container Storage Plug-in contains a unique cluster ID set in
the configuration file. This section contains information about automatic generation of cluster IDs.

If you do not provide the cluster ID field or keep it empty while creating a configuration secret
for vSphere Container Storage Plug-in, the plug-in automatically generates a unique cluster ID
across all clusters. The vsphere-csi-cluster-id configuration map is created in the namespace
where you have installed vSphere Container Storage Plug-in to store this cluster ID.
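
For example, you can inspect the automatically generated cluster ID by reading this
configuration map, shown here for the default vmware-system-csi namespace:

kubectl get configmap vsphere-csi-cluster-id --namespace=vmware-system-csi -o yaml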

Upgrade vSphere Container Storage Plug-in


When you upgrade vSphere Container Storage Plug-in, do not remove the cluster ID attribute if it
is already part of the vSphere configuration secret. Otherwise, old volumes continue to use the
old cluster ID while new volumes begin to use the new cluster ID. As a result, the PVs registered
under the old cluster ID cannot be used, which causes volume operation failures.

Downgrade vSphere Container Storage Plug-in


If you want to downgrade vSphere Container Storage Plug-in version 3.x that uses an
automatically created cluster ID to an older version 2.x, follow these steps.

Procedure

1 Fetch the cluster ID from the vsphere-csi-cluster-id configuration map in the namespace
where you have installed vSphere Container Storage Plug-in.


2 Uninstall vSphere Container Storage Plug-in with the new version.

3 When you install vSphere Container Storage Plug-in version 2.x, specify the cluster ID
retrieved in step 1 when you create a vSphere configuration secret.

Install the vSphere Container Storage Plug-in


Install an appropriate version of the vSphere Container Storage Plug-in in your native Kubernetes
environment. After you install the plug-in, you can verify whether the installation is successful.

To deploy your Kubernetes cluster with the topology-aware provisioning feature, see
Topology-Aware Volume Provisioning.

Prerequisites

Check which Kubernetes version corresponds to a specific version of the vSphere Container
Storage Plug-in. See Compatibility Matrices for vSphere Container Storage Plug-in. For
information about features that different versions of the vSphere Container Storage Plug-in
support, see Supported Kubernetes Functionality.

Procedure

1 Deploy the vSphere Container Storage Plug-in.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.0.0/manifests/vanilla/vsphere-csi-driver.yaml

Note To be able to take advantage of the latest bug fixes and feature updates,
make sure to use the most recent version of the vSphere Container Storage Plug-in. For
versions and updates, see Release Notes on the VMware vSphere Container Storage Plug-in
Documentation page.

If you deploy the vSphere Container Storage Plug-in in a single control plane setup, you can
edit the following field to change the number of replicas to one.
If the driver is already deployed, use the kubectl edit command to reduce the number of
replicas.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: vsphere-csi-controller
  namespace: vmware-system-csi
spec:
  replicas: number_of_replicas
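
If the driver is already deployed, a one-line alternative to editing the manifest is the
kubectl scale command, sketched here for a single control plane setup:

kubectl scale deployment vsphere-csi-controller --namespace=vmware-system-csi --replicas=1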


2 Verify that the vSphere Container Storage Plug-in has been successfully deployed.

a Verify that vsphere-csi-controller instances run on the control plane node and
vsphere-csi-node instances run on worker nodes of the cluster.

$ kubectl get deployment --namespace=vmware-system-csi
NAME                     READY   AGE
vsphere-csi-controller   1/1     2m58s

$ kubectl get daemonsets vsphere-csi-node --namespace=vmware-system-csi
NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
vsphere-csi-node   4         4         4       4            4           <none>          3m51s

b Verify that the vSphere Container Storage Plug-in has been registered with Kubernetes.

$ kubectl describe csidrivers


Name: csi.vsphere.vmware.com
Namespace:
Labels: <none>
Annotations: <none>
API Version: storage.k8s.io/v1
Kind: CSIDriver
Metadata:
Creation Timestamp: 2020-04-14T20:46:07Z
Resource Version: 2382881
Self Link: /apis/storage.k8s.io/v1beta1/csidrivers/csi.vsphere.vmware.com
UID: 19afbecd-bc2f-4806-860f-b29e20df3074
Spec:
Attach Required: true
Pod Info On Mount: false
Volume Lifecycle Modes:
Persistent
Events: <none>

c Verify that the CSINodes have been created.

$ kubectl get CSINode


NAME CREATED AT
<k8s-worker1-name> 2020-04-14T12:30:29Z
<k8s-worker2-name> 2020-04-14T12:30:38Z
<k8s-worker3-name> 2020-04-14T12:30:21Z
<k8s-worker4-name> 2020-04-14T12:30:26Z

Results

When the installation is complete, a configmap named
internal-feature-states.csi.vsphere.vmware.com is created in the vmware-system-csi namespace
of the Kubernetes cluster. This configmap is for internal use only and must not be modified.
Any modifications to this configmap are not supported.
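
For example, you can confirm that the configmap exists without modifying it:

kubectl get configmap internal-feature-states.csi.vsphere.vmware.com --namespace=vmware-system-csi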


Deploying vSphere Container Storage Plug-in with Multiple vCenter Server Instances
To provide higher availability, performance, and scalability, vSphere Container Storage Plug-in
supports deployments with multiple vCenter Server instances.

In a Kubernetes cluster based on a single vCenter Server system, vSphere Container Storage
Plug-in is as highly available as vCenter Server. If vCenter Server fails, vSphere Container Storage
Plug-in stops volume operations. In addition, the performance and throughput of volume life
cycle operations and the scale of volumes are limited to what a single vCenter Server instance
supports.

With the multi-instance vCenter Server functionality, you can stretch the Kubernetes cluster
across multiple vCenter Server instances. This allows you to achieve higher availability,
performance, and scale of persistent volumes. Also, in a multi-zone infrastructure topology, you
can deploy one instance of vCenter Server per availability zone, or fault domain. You can then
stretch the Kubernetes cluster across the availability zones for higher availability, performance,
and scale of persistent volumes.

Advantages of Deployment with Multiple vCenter Server Instances


Deployments with multiple vCenter Server instances offer the following advantages:

n Improved availability. In a multi-zone deployment topology, if an availability zone fails, the
failure affects volume life cycle operations in only that particular availability zone. The life
cycle operations in other availability zones continue.

n Improved performance. In a Kubernetes deployment stretched across multiple vCenter
Server instances, vSphere Container Storage Plug-in has more vCenter Server systems
available for performing volume operations. As a result, the volume operation throughput
increases.

n Improved scale. A single vCenter Server instance supports a maximum of 10k CNS block
volumes. In a Kubernetes deployment stretched across multiple vCenter Server instances,
vSphere Container Storage Plug-in is able to support 10k CNS block volumes per vCenter
Server.

Guidelines for Deployment with Multiple vCenter Server Instances


When you deploy vSphere Container Storage Plug-in in an environment with multiple vCenter
Server instances, follow these guidelines and best practices.

n The deployment supports only native Kubernetes clusters. The deployment does not support
Tanzu Kubernetes Grid clusters.

n The deployment supports only block volume provisioning. File volume provisioning is not
supported.


n Your deployment must be topology-aware. If you haven't used the topology functionality,
you must recreate the entire cluster to be topology-aware. During volume provisioning,
the topology mechanism helps identify and specify nodes for selecting vSphere storage
resources spread across multiple vCenter Server systems.

n Follow all guidelines for deploying vSphere Container Storage Plug-in with topology. For
information, see Deploying vSphere Container Storage Plug-in with Topology.

n Every node VM on each vCenter Server must have a tag. The tag values must yield a unique
combination across all categories mentioned in the topology-categories parameter. The
topology-categories parameter is specified in the vSphere configuration secret on the
associated vCenter Server.

Node registration fails if the node does not belong to every category under the
topology-categories parameter, or if the tag values do not generate a unique combination
across the different tag categories in the associated vCenter Server.

n vSphere Container Storage Plug-in version 3.0 does not support datastores shared across
multiple vCenter Server instances.

n To provision storage based on any specific storage policy, configure the storage policy
for each individual vCenter Server, so that vCenter Server can follow the policy-based
provisioning requirements. Make sure to use the same policy name and the same policy
parameters on all participating vCenter Server systems. A StorageClass sketch follows this list.

n Specify configuration details of all configured vCenter Server instances under a separate
VirtualCenter section in the vSphere configuration file.
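
For example, the following StorageClass sketch provisions block volumes against a storage
policy that must be defined identically on every vCenter Server instance. The StorageClass
name and the policy name gold-policy are placeholders for illustration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: multi-vc-sc
provisioner: csi.vsphere.vmware.com
parameters:
  # The policy must exist with the same name and parameters on all vCenter Server instances.
  storagepolicyname: "gold-policy"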

Example of Topology with Multiple vCenter Server Instances


In the following example, three vCenter Server instances belong to three zones.

n vCenter Server - 1, vSphere Cluster (Availability Zone Category: k8s-zone, Tag: zone-1)
Node VMs: ControlPlaneVM1, WorkerNodeVM1, WorkerNodeVM2

n vCenter Server - 2, vSphere Cluster (Availability Zone Category: k8s-zone, Tag: zone-2)
Node VMs: ControlPlaneVM2, WorkerNodeVM3, WorkerNodeVM4

n vCenter Server - 3, vSphere Cluster (Availability Zone Category: k8s-zone, Tag: zone-3)
Node VMs: ControlPlaneVM3, WorkerNodeVM5, WorkerNodeVM6


Install vSphere Cloud Provider Interface in an Environment with Multiple vCenter Server Instances
Follow these steps to install the vSphere Cloud Provider Interface in an environment with multiple
vCenter Server instances.

Procedure

1 Download the vsphere-cloud-controller-manager.yaml file.

$ wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-
VERSION/releases/vVERSION/vsphere-cloud-controller-manager.yaml

Replace VERSION with an appropriate major version of Kubernetes. For example, if the
version is 1.22.x, run the following command:
$ wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/
release-1.22/releases/v1.22/vsphere-cloud-controller-manager.yaml

2 Modify the downloaded vsphere-cloud-controller-manager.yaml file to add information
about all vCenter Server instances in the vcenter section.

Use the following as an example.

apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-secret
  labels:
    vsphere-cpi-infra: secret
    component: cloud-controller-manager
  namespace: kube-system
# NOTE: this is just an example configuration, update with real values based on your environment
stringData:
  VC-1-IP.username: "username"
  VC-1-IP.password: "password"
  VC-2-IP.username: "username"
  VC-2-IP.password: "password"
  VC-3-IP.username: "username"
  VC-3-IP.password: "password"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vsphere-cloud-config
  labels:
    vsphere-cpi-infra: config
    component: cloud-controller-manager
  namespace: kube-system
data:
  # NOTE: this is just an example configuration, update with real values based on your environment
  vsphere.conf: |
    # Global properties in this section will be used for all specified vCenters
    # unless overridden in the VirtualCenter section.
    global:
      port: 443
      # set insecureFlag to true if the vCenter uses a self-signed cert
      insecureFlag: true
      # settings for using k8s secret
      secretName: vsphere-cloud-secret
      secretNamespace: kube-system

    # vcenter section
    vcenter:
      vc-1:
        server: <VC-1-IP or VC-1-FQDN>
        user: <username>
        password: <password>
        datacenters:
          - datacenter-path
      vc-2:
        server: <VC-2-IP or VC-2-FQDN>
        user: <username>
        password: <password>
        datacenters:
          - datacenter-path
      vc-3:
        server: <VC-3-IP or VC-3-FQDN>
        user: <username>
        password: <password>
        datacenters:
          - datacenter-path

3 Install the vSphere Cloud Provider Interface.

Follow steps in Install vSphere Cloud Provider Interface.

Deploy vSphere Container Storage Plug-in with Multiple vCenter Server Instances
To deploy vSphere Container Storage Plug-in in an environment with multiple vCenter Server
instances, follow this procedure.

Prerequisites

Make sure that vSphere Container Storage Plug-in is of version 3.0 or later.

Procedure

1 In the vSphere configuration file, add configuration details for all instances of vCenter Server
under the VirtualCenter section.

For information about the configuration file, see Create a Kubernetes Secret for vSphere
Container Storage Plug-in.


Use the following configuration file as an example for provisioning block volumes in a
vSphere environment with three instances of vCenter Server.

$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "<cluster-id>"
cluster-distribution = "<cluster-distribution>"

[VirtualCenter "<VC-1-IP or VC-1-FQDN>"]
insecure-flag = "<true or false>"
user = "<username>"
password = "<password>"
port = "<port>"
datacenters = "<datacenter1-path>, <datacenter2-path>, ..."
ca-file = <ca file path> # optional, use with insecure-flag set to false
thumbprint = "<cert thumbprint>" # optional, use with insecure-flag set to false without providing ca-file

[VirtualCenter "<VC-2-IP or VC-2-FQDN>"]
insecure-flag = "<true or false>"
user = "<username>"
password = "<password>"
port = "<port>"
datacenters = "<datacenter1-path>, <datacenter2-path>, ..."
ca-file = <ca file path> # optional, use with insecure-flag set to false
thumbprint = "<cert thumbprint>" # optional, use with insecure-flag set to false without providing ca-file

[VirtualCenter "<VC-3-IP or VC-3-FQDN>"]
insecure-flag = "<true or false>"
user = "<username>"
password = "<password>"
port = "<port>"
datacenters = "<datacenter1-path>, <datacenter2-path>, ..."
ca-file = <ca file path> # optional, use with insecure-flag set to false
thumbprint = "<cert thumbprint>" # optional, use with insecure-flag set to false without providing ca-file

[Labels]
topology-categories = "k8s-zone"

2 Deploy topology-aware vSphere Container Storage Plug-in.

For information about enabling topology when deploying vSphere Container Storage Plug-in,
see Deploy vSphere Container Storage Plug-in with Topology.

3 After the installation, verify the topology-aware setup with multiple vCenter Server instances.

Check that the driver pods of vSphere Container Storage Plug-in are up and running.
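
For example:

kubectl get pods --namespace=vmware-system-csi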


Deploying vSphere Container Storage Plug-in with Topology


When you deploy vSphere Container Storage Plug-in in a vSphere environment that includes
multiple data centers or host clusters, you can use zoning. Zoning enables orchestration systems,
such as Kubernetes, to integrate with vSphere storage resources that are not equally available to
all nodes. With zoning, the orchestration system can make intelligent decisions when dynamically
provisioning volumes. The intelligent volume provisioning helps you avoid situations such as
those where a pod cannot start because the storage resource it needs is not accessible.

Guidelines and Best Practices for Deployment with Topology


When you deploy vSphere Container Storage Plug-in using zoning, follow these guidelines and
best practices.

n If you already use vSphere Container Storage Plug-in to run applications, but haven't used
the topology feature, you must re-create the entire cluster and delete any existing PVCs in
the system to be able to use the topology feature.

n Depending on your vSphere storage environment, you can use different deployment
scenarios for availability zones. For example, you can define availability zones per host, host
cluster, data center or have a combination of these.

n Kubernetes assumes that even though the topology labels applied on a node are mutable,
a node will not move between zones without being destroyed and re-created. See
https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone. As a
result, if you define the availability zones on the host level, make sure to pin the node VMs to
their respective hosts to prevent migration of these VMs to other availability zones. You can
do this by either creating the node VM on the hosts' local datastore or by setting the DRS
VM-Host affinity rules. For information, see VM-Host Affinity Rules.

An exception to this guideline is an active-passive setup that has storage replicated between
two topology domains as specified in the diagram in Deploy Workloads on a Preferential
Datastore in a Topology-Aware Environment. In this case, you can migrate node VMs
temporarily when either of the topology domains is down.

n Distribute your control plane VMs across the availability zones to ensure high availability.

n Have at least one worker node VM in each availability zone. Following this guideline is helpful
when you use a StorageClass with no topology requirements explicitly set, and Kubernetes
randomly selects a topology domain to schedule a pod in it. In such cases, if the topology
domain does not have a worker node, the pod remains in pending state.

n Do not apply topology related vSphere tags to node VMs. vSphere Container Storage Plug-in
cannot read topology labels applied on the nodes.

n Use the topology-categories parameter in the Labels section of the vSphere configuration
file. This parameter adds topology custom labels of the form
topology.csi.vmware.com/<category-name> to the nodes. vSphere Container Storage Plug-in of
releases earlier than 2.4 does not support the topology-categories parameter.

vSphere Secret Configuration Sample:

[Labels]
topology-categories = "k8s-region,k8s-zone"

Labels on a Node:

Name:         k8s-node-0179
Roles:        <none>
Labels:       topology.csi.vmware.com/k8s-region=region-1
              topology.csi.vmware.com/k8s-zone=zone-a
Annotations:  ....

n Each node VM in a topology-aware Kubernetes cluster must belong to a tag under each
category mentioned in the topology-categories parameter in Step 2.a. Node registration fails
if a node does not belong to every category under the topology-categories parameter.

n With vSphere Container Storage Plug-in version 3.1.0 and later, volume provisioning requests
in topology-aware environments attempt to create volumes in datastores accessible to all
hosts under a given topology segment. This includes hosts that do not have Kubernetes node
VMs running on them. For example, if the vSphere Container Storage Plug-in driver gets a
request to provision a volume in zone-a, applied on the Datacenter dc-1, all hosts under dc-1
must have access to the datastore selected for volume provisioning. The hosts include those
that are directly under dc-1 and those that are a part of clusters inside dc-1.

Sample vCenter Server Topology Configuration


In the following example, the vCenter Server environment consists of four clusters across two
regions. Availability zones are defined per data center as well as per host cluster.

n Datacenter1 (Availability Zone Category: k8s-region, Tag: region-1)
  n Cluster1 (Availability Zone Category: k8s-zone, Tag: zone-A): ControlPlaneVM1, WorkerNodeVM1
  n Cluster2 (Availability Zone Category: k8s-zone, Tag: zone-B): ControlPlaneVM2, WorkerNodeVM2, WorkerNodeVM3
  n Cluster3 (Availability Zone Category: k8s-zone, Tag: zone-C): ControlPlaneVM3, WorkerNodeVM4

n Datacenter2 (Availability Zone Category: k8s-region, Tag: region-2)
  n Cluster4 (Availability Zone Category: k8s-zone, Tag: zone-D): WorkerNodeVM5, WorkerNodeVM6


Deploy vSphere Container Storage Plug-in with Topology


Use this task for a greenfield deployment of vSphere Container Storage Plug-in with topology.

Prerequisites

Note
n With vSphere Container Storage Plug-in version 3.0.2, you can transition from a non-topology
configuration to a topology configuration if there are no Persistent Volumes (PVs) in the
cluster. However, you cannot migrate non-topology setups with PVs to topology setups.

n Using vSphere Container Storage Plug-in version 3.0.2, you can migrate non-topology setups
to topology setups without managing internal topology CRs. This simplifies the transition
process if you have already deployed a non-topology setup and want to enable topology
before deploying any workload on the cluster.

n You cannot transition back from a topology setup to a non-topology setup.

To create tags for your zones, ensure that you meet the following prerequisites:

n Have appropriate tagging privileges that control your ability to work with tags.

n Ancestors of node VMs, such as a host, cluster, folder, and data center, must have the
ReadOnly role set for the vSphere user configured to use vSphere Container Storage Plug-in.
This is required to allow reading tags and categories to discover the nodes' topology. For
more information, see vSphere Tagging Privileges in the vSphere Security documentation.

Procedure

1 In the vSphere Client, create zones using vSphere tags.

Follow these naming guidelines:

n The names you use for the tag categories and tags must be 63 characters or less, begin
and end with an alphanumeric character, and contain only dashes, underscores, dots, or
alphanumerics in between.

n Do not use the same name for two different tag categories.

n Tags you create should be unique across topology domains. For example, you cannot
have zone1 under Region1 and zone1 under Region2.


For information, see Create, Edit, or Delete a Tag Category in the vCenter Server and Host
Management documentation.
a Create two tag categories: k8s-zone and k8s-region.

b Under each category, create tags.

Category     Tags
k8s-region   region-1, region-2
k8s-zone     zone-A, zone-B, zone-C, zone-D

c Apply corresponding tags to the data centers and clusters as indicated in the following
table.

For information, see Assign or Remove a Tag in the vCenter Server and Host
Management documentation.

vSphere Object Tag

Datacenter1 region-1

Datacenter2 region-2

Cluster1 zone-A

Cluster2 zone-B

Cluster3 zone-C

Cluster4 zone-D
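
If you prefer the command line, you can create and assign the same categories and tags with
the govc CLI. The following is a sketch that assumes govc is installed and configured to
point at your vCenter Server, and that uses the inventory paths from this example:

govc tags.category.create k8s-region
govc tags.category.create k8s-zone
govc tags.create -c k8s-region region-1
govc tags.create -c k8s-zone zone-A
govc tags.attach -c k8s-region region-1 /Datacenter1
govc tags.attach -c k8s-zone zone-A /Datacenter1/host/Cluster1

Repeat the tags.create and tags.attach commands for the remaining regions, zones, data
centers, and clusters.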


2 Enable topology when deploying vSphere Container Storage Plug-in.

a In the vSphere configuration file, add entries to create a topology-aware setup.

For information about the configuration file, see Create a Kubernetes Secret for vSphere
Container Storage Plug-in.

[Labels]
topology-categories = "k8s-region, k8s-zone"

The topology-categories parameter takes a comma-separated list of availability zones that
correspond to the categories that the vSphere administrator created in Step 1.a. You can
add a maximum of five categories in the topology-categories parameter.

b Deploy topology-aware vSphere Container Storage Plug-in.

Make sure the external-provisioner sidecar is deployed with the arguments
--feature-gates=Topology=true and --strict-topology.

To do this, search for the following lines and uncomment them in the YAML file at
https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v3.0.0/manifests/vanilla.

Note To be able to take advantage of the latest bug fixes and feature updates, make
sure to use the most recent version of vSphere Container Storage Plug-in. For versions
and updates, see Release Notes on the VMware vSphere Container Storage Plug-in
Documentation page.

# needed only for topology aware setup
#- "--feature-gates=Topology=true"
#- "--strict-topology"

For information about deploying vSphere Container Storage Plug-in, see Install the
vSphere Container Storage Plug-in.


3 After the installation, verify the topology-aware setup.

a Verify that all csinodes objects include the topologyKeys parameter.

$ kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec}{"\n"}{end}'
k8s-control-1
{"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-control-1","topologyKeys":
["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
k8s-control-2
{"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-control-2","topologyKeys":
["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
k8s-control-3
{"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-control-3","topologyKeys":
["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
k8s-node-1 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-
node-1","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/
k8s-zone"]}]}
k8s-node-2 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-
node-2","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/
k8s-zone"]}]}
k8s-node-3 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-
node-3","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/
k8s-zone"]}]}
k8s-node-4 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-
node-4","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/
k8s-zone"]}]}
k8s-node-5 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-
node-5","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/
k8s-zone"]}]}
k8s-node-6 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-
node-6","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/
k8s-zone"]}]}

b Verify that all nodes have the right topology labels set on them.

Your output must look similar to the sample below.

$ kubectl get nodes --show-labels


NAME STATUS ROLES AGE VERSION LABELS
k8s-control-1 Ready control-plane 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-A
k8s-control-2 Ready control-plane 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-B
k8s-control-3 Ready control-plane 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-C
k8s-node-1 Ready <none> 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-A
k8s-node-2 Ready <none> 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-B
k8s-node-3 Ready <none> 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-B
k8s-node-4 Ready <none> 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-C
k8s-node-5 Ready <none> 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-2,topology.csi.vmware.com/k8s-zone=zone-D
k8s-node-6 Ready <none> 1d v1.21.1 topology.csi.vmware.com/
k8s-region=region-2,topology.csi.vmware.com/k8s-zone=zone-D

What to do next

Deploy workloads using topology. See Topology-Aware Volume Provisioning.
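
For example, a topology-aware StorageClass that restricts provisioning to a single zone from
the setup above might look like the following sketch. The StorageClass name is a placeholder,
and volumeBindingMode: WaitForFirstConsumer delays volume creation until a pod is scheduled:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: zone-a-sc
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-zone
        values:
          - zone-A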

Upgrading vSphere Container Storage Plug-in


After you install vSphere Container Storage Plug-in in a native Kubernetes cluster, you can
upgrade the plug-in to a newer version.

Procedures in this section apply only to native, also called vanilla, Kubernetes clusters deployed
in vSphere environment. To upgrade vSphere with Tanzu, see vSphere with Tanzu Configuration
and Management.

vSphere Container Storage Plug-in Upgrade Considerations and Guidelines
When you perform an upgrade, follow these guidelines:

n Be familiar with installation prerequisites and procedures for vSphere Container Storage Plug-
in. See Preparing for Installation of vSphere Container Storage Plug-in and Deploying the
vSphere Container Storage Plug-in on a Native Kubernetes Cluster.

n Ensure that roles and privileges in your vSphere environment are updated. For more
information, see vSphere Roles and Privileges.

n To upgrade to vSphere Container Storage Plug-in 2.3.0, you need DNS forwarding
configuration in CoreDNS ConfigMap to help resolve vSAN file share hostname. For more
information, see Configure CoreDNS for vSAN File Share Volumes.

n If you have RWM volumes backed by file service deployed using vSphere Container Storage
Plug-in, remount the volumes before you upgrade vSphere Container Storage Plug-in.

n When upgrading from Beta topology to GA in vSphere Container Storage Plug-in, follow
these recommendations. For information about deployments with topology, see Deploy
vSphere Container Storage Plug-in with Topology.

n If you have used the topology feature in its Beta version on vSphere Container Storage
Plug-in version 2.3 or earlier, upgrade vSphere Container Storage Plug-in to version 2.4.1
or later to be able to use the GA version of the topology feature.

n If you have used the Beta topology feature and plan to upgrade vSphere Container
Storage Plug-in to version 2.4.1 or later, continue using only the zone and region
parameters.


n If you do not specify Label for a particular topology category while using the zone and
region parameters in the configuration file, vSphere Container Storage Plug-in assumes
the default Beta topology behavior and applies failure-domain.beta.kubernetes.io/XYZ
labels on the node. You do not need to make a mandatory configuration change before
upgrading the driver from Beta topology to GA topology feature.

Earlier vSphere Secret Configuration:

[Labels]
region = k8s-region
zone = k8s-zone

vSphere Secret Configuration Before the Upgrade:

[Labels]
region = k8s-region
zone = k8s-zone

Sample Labels on a Node After the Upgrade:

Name:         k8s-node-0179
Roles:        <none>
Labels:       failure-domain.beta.kubernetes.io/region=region-1
              failure-domain.beta.kubernetes.io/zone=zone-a
Annotations:  ....

n If you intend to use the topology GA labels after upgrading to vSphere Container Storage
Plug-in 2.4.1 or later, make sure you do not have any pre-existing StorageClasses or PV
node affinity rules pointing to the topology beta labels in the environment and then make
the following change in the vSphere configuration secret.

Earlier vSphere Secret Configuration:

[Labels]
region = k8s-region
zone = k8s-zone

vSphere Secret Configuration Before the Upgrade:

[Labels]
region = k8s-region
zone = k8s-zone
[TopologyCategory "k8s-region"]
Label = "topology.kubernetes.io/region"
[TopologyCategory "k8s-zone"]
Label = "topology.kubernetes.io/zone"

Sample Labels on a Node After the Upgrade:

Name:         k8s-node-0179
Roles:        <none>
Labels:       topology.kubernetes.io/region=region-1
              topology.kubernetes.io/zone=zone-a
Annotations:  ....

Remount ReadWriteMany Volumes Backed by vSAN File Service


If you have RWM volumes backed by vSAN file service, use this procedure to remount the
volumes before you upgrade vSphere Container Storage Plug-in.

This procedure is required if you upgrade from v2.0.1 to v2.3.0.

Note If you upgrade from 2.3.0 to 2.4.0 or later, you do not need to perform these steps. In
addition, upgrades from v2.2.2, v2.1.2, and v2.0.2 to version v2.3.0 or later do not require this
procedure.


When you perform the following steps in a maintenance window, the process might disrupt
active IOs on the file share volumes used by application pods. If you have multiple replicas of the
pod that access the same file share volume, perform the following steps on each mount point
serially to minimize downtime and disruptions.

Note Use this task only when the vSphere Container Storage Plug-in node daemonset runs
as a container. When it runs as a process on the TKGi platform, the task does not apply.
However, you also need to perform these steps when TKGi is upgraded from the pod-based
driver to the process-based driver. For more information, see the documentation at
https://docs.pivotal.io/tkgi/1-12/vsphere-cns.html#uninstall-csi.

Procedure

1 Find all RWM volumes on the cluster.

# kubectl get pv -o wide | grep 'RWX\|ROX'


pvc-7e43d1d3-2675-438d-958d-41315f97f42e 1Gi RWX Delete
Bound default/www-web-0 file-sc 107m Filesystem

2 Find all nodes where RWM volume is attached.

In the following example, the volume is attached and mounted on the k8s-worker3 node.

# kubectl get volumeattachment | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e


csi-3afe670705e55e0679cba3f013c78ff9603333fdae6566745ea5f0cb9d621b20
csi.vsphere.vmware.com pvc-7e43d1d3-2675-438d-958d-41315f97f42e k8s-worker3
true 22s

3 To discover where the volume is mounted, log in to the k8s-worker3 node VM and use the
following command.

root@k8s-worker3:~# mount | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e


10.83.28.38:/52d7e15c-d282-3bae-f64d-8851ad9d352c on /var/lib/kubelet/pods/43686ba4-
d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/
pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount type nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retra
ns=2,sec=sys,clientaddr=10.244.3.134,local_lock=none,addr=10.83.28.38)


4 Unmount and remount the volume at the same location with the same mount options originally
used for mounting the volume.

For this step, you need to pre-install the nfs-common package on the worker VMs.

a Use the umount -fl command to unmount the volume.

root@k8s-worker3:~#
umount -fl /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/
kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount

b Remount the volume with the same mount options used originally.

root@k8s-worker3:~# mount -t nfs4 -o sec=sys,minorversion=1 10.83.28.38:/52d7e15c-


d282-3bae-f64d-8851ad9d352c /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/
volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount

5 Confirm the mount point is accessible from the node VM.

root@k8s-worker3:~# mount | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e


10.83.28.38:/52d7e15c-d282-3bae-f64d-8851ad9d352c on /var/lib/kubelet/pods/43686ba4-
d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/
pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount type nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retra
ns=2,sec=sys,clientaddr=10.83.26.244,local_lock=none,addr=10.83.28.38)

root@k8s-worker3:~# ls -la /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/


volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount
total 4
drwxrwxrwx 3 root root 0 Aug 9 16:48 .
drwxr-x--- 3 root root 4096 Aug 9 18:40 ..
-rw-r--r-- 1 root root 6 Aug 9 16:48 test

What to do next

After you have remounted all the vSAN file share volumes on the worker VMs, upgrade the
vSphere Container Storage Plug-in by reinstalling its YAML files.

Upgrade vSphere Container Storage Plug-in of a Version Earlier than 2.3.0
If you use vSphere Container Storage Plug-in of a version earlier than 2.3.0, to perform an
upgrade, you first need to uninstall the earlier version. You then install a version of your choice.

The following example illustrates an upgrade of vSphere Container Storage Plug-in from v2.2.0 to
v2.3.0.


Procedure

1 Uninstall the existing version of vSphere Container Storage Plug-in using
https://github.com/kubernetes-sigs/vsphere-csi-driver/tags.

kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-controller-deployment.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-node-ds.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/rbac/vsphere-csi-controller-rbac.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/rbac/vsphere-csi-node-rbac.yaml

After you run the above commands, wait for the vSphere Container Storage Plug-in controller
pod and vSphere Container Storage Plug-in node pods to be deleted completely.

2 Install vSphere Container Storage Plug-in of your choice, for example, v2.3.0.

a Create a new namespace for vSphere Container Storage Plug-in.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.3.0/manifests/vanilla/namespace.yaml

b Copy the vsphere-config-secret secret from the kube-system namespace to the new namespace
vmware-system-csi.

kubectl get secret vsphere-config-secret --namespace=kube-system -o yaml | sed 's/namespace: .*/namespace: vmware-system-csi/' | kubectl apply -f -

c Delete vsphere-config-secret secret from kube-system namespace.

kubectl delete secret vsphere-config-secret --namespace=kube-system

d Install vSphere Container Storage Plug-in.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.3.0/manifests/vanilla/vsphere-csi-driver.yaml

Upgrade vSphere Container Storage Plug-in of a Version 2.3.0 or Later
If you use vSphere Container Storage Plug-in of a version 2.3.0 or later, you can perform rolling
upgrades.

Procedure

1 If needed, apply changes to vsphere-config-secret in the vmware-system-csi namespace.

2 Apply any necessary changes to the manifest pertaining to the release that you wish to use.

For example, adjust the replica count in the vsphere-csi-controller deployment depending
upon the number of control plane nodes in the cluster.


3 To upgrade vSphere Container Storage Plug-in 2.3.0 or later, run the following command.

The following example uses 3.0.0 as a target version, but you can substitute it with any other
version later than 2.3.0.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.0.0/manifests/vanilla/vsphere-csi-driver.yaml

Note To be able to take advantage of the latest bug fixes and feature updates, make sure to
use the most recent version of vSphere Container Storage Plug-in. For versions and updates,
see Release Notes on the VMware vSphere Container Storage Plug-in Documentation page.

Enable Volume Snapshot and Restore After an Upgrade to Version 2.5.x or Later
If you have upgraded vSphere Container Storage Plug-in from version 2.4.x to version 2.5.x or
later, you can enable the volume snapshot and restore feature.

Procedure

1 If you haven't previously enabled the snapshot feature and installed snapshot components,
perform the following steps:

a Enable Volume Snapshot and Restore

b Configure Maximum Number of Snapshots per Volume

2 If you have enabled the snapshot feature or if any snapshot components exist in the setup,
follow these steps:

a Manually upgrade snapshot-controller and snapshot-validation-deployment to version 5.0.1.

For information, see https://github.com/kubernetes-csi/external-snapshotter.

b Enable Volume Snapshot and Restore

c Configure Maximum Number of Snapshots per Volume
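
As a sketch, enabling the feature amounts to setting the block-volume-snapshot feature state
to true in the internal feature states configmap; verify the exact flag name against the
Enable Volume Snapshot and Restore procedure for your version:

kubectl patch configmap internal-feature-states.csi.vsphere.vmware.com \
  --namespace=vmware-system-csi --type merge -p '{"data":{"block-volume-snapshot":"true"}}'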

Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in
You can migrate in-tree vSphere volumes to vSphere Container Storage Plug-in. After you
migrate the in-tree vSphere volumes, vSphere Container Storage Plug-in performs all subsequent
operations on migrated volumes.


vSphere Container Storage Plug-in and CNS provide functionality that is not available with the
in-tree vSphere volume plug-in. For information, see Supported Kubernetes Functionality and
vSphere Functionality Supported by vSphere Container Storage Plug-in.

Note
n Migration of in-tree vSphere volumes to CSI does not work with Kubernetes version 1.29.0.
See https://github.com/kubernetes/kubernetes/issues/122340.

Use Kubernetes version 1.29.1 or later.

n Kubernetes will deprecate the in-tree vSphere volume plug-in, and it will be removed in
future Kubernetes releases. Volumes provisioned using the vSphere in-tree plug-in will not
get additional new features supported by the vSphere Container Storage Plug-in.

n Kubernetes provides a seamless procedure to help migrate in-tree vSphere volumes to
vSphere Container Storage Plug-in. After you migrate the in-tree vSphere volumes to vSphere
Container Storage Plug-in, all subsequent operations on migrated volumes are performed by
the vSphere Container Storage Plug-in. The migrated vSphere volumes will not get additional
capabilities supported by vSphere Container Storage Plug-in.

Considerations for Migration of In-Tree vSphere Volumes


When you prepare to use the vSphere Container Storage Plug-in migration, consider the
following items.

n vSphere version 7.0 P07 or vSphere version 8.0 Update 2 or later is recommended for
in-tree vSphere volume migration to vSphere Container Storage Plug-in.

n vSphere Container Storage Plug-in migration is released as a Beta feature in Kubernetes 1.19.
For more information, see Release note announcement.

n If you plan to use CSI version 3.0 or 3.1 for migrated volumes, use the latest patch version
3.0.3 or version 3.1.1. These patch versions include the fix for the issue
https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/2534. This issue occurs when
both CSI migration and list-volumes functionality are enabled.

n Kubernetes 1.19 release deprecates vSAN raw policy parameters for the in-tree vSphere
volume plug-in. These parameters will be removed in a future release. For more information,
see Deprecation Announcement.

n The following vSphere in-tree StorageClass parameters are not supported after you enable
migration:

n hostfailurestotolerate

n forceprovisioning

n cachereservation

n diskstripes

n objectspacereservation


n iopslimit

n diskformat

n You cannot rename or delete the storage policy consumed by an in-tree vSphere volume.
Volume migration requires the original storage policy used for provisioning the volume to be
present on vCenter Server for registration of volume as a container volume in vSphere.

n Do not rename the datastore consumed by in-tree vSphere volume. Volume migration relies
on the original datastore name present on the volume source for registration of volume as
container volume in vSphere.

n Make sure to add the following annotations before enabling migration for statically created
vSphere in-tree Persistent Volume Claims and Persistent Volumes. Statically provisioned
in-tree vSphere volumes cannot be migrated to the vSphere Container Storage Plug-in without
these annotations. This also applies to new static in-tree PVs and PVCs created after you
enable migration.

Annotation on PV:

annotations:
  pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume

Annotation on PVC:

annotations:
  volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
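
For existing objects, you can add these annotations with kubectl; the PV and PVC names below
are placeholders:

kubectl annotate pv <pv-name> pv.kubernetes.io/provisioned-by=kubernetes.io/vsphere-volume
kubectl annotate pvc <pvc-name> volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume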

n After migration, if you use vSphere releases earlier than 8.0 Update 1, the only supported
value for diskformat parameter is thin. Volumes created before the migration with the disk
format eagerzeroedthick or zeroedthick are migrated to CSI.

Starting with vSphere 8.0 Update 1, you can use storage policies with thick volume
requirement to migrate eagerzeroedthick or zeroedthick volumes. For more information, see
Create a VM Storage Policy for VMFS Datastore in the vSphere Storage documentation.

n vSphere Container Storage Plug-in does not support raw vSAN policy parameters. After you
enable the migration, vSphere Container Storage Plug-in fails the volume creation activity
when you request a new volume using in-tree provisioner and vSAN raw policy parameters.

n The vSphere Container Storage Plug-in migration requires a compatible version of vSphere.
For information, see Supported Kubernetes Functionality.

n vSphere Container Storage Plug-in does not support formatting volumes with the Windows
file system. In-tree vSphere volumes migrated using the Windows file system cannot be used
with the vSphere Container Storage Plug-in.

n In-tree vSphere volume plug-in relies on the name of the datastore set on the PVs source.
After you enable migration, do not enable Storage DRS or vMotion. If Storage DRS moves a
disk from one datastore to another, further volume operations might break.


n If you use zone and region aware in-tree deployments, upgrade to vSphere Container
Storage Plug-in version 2.4.1 and later.

Before installing vSphere Container Storage Plug-in, add the following section to the vSphere
secret configuration.

Kubernetes version 1.21.x and below (earliest supported version is 1.19):

In-Tree vSphere Secret Configuration:
[Labels]
region = k8s-region
zone = k8s-zone

vSphere Container Storage Plug-in Secret Configuration:
[Labels]
region = k8s-region
zone = k8s-zone

Sample Labels on vSphere Container Storage Plug-in Nodes after Installation:
Name:         k8s-node-0179
Roles:        <none>
Labels:       failure-domain.beta.kubernetes.io/region=region-1
              failure-domain.beta.kubernetes.io/zone=zone-a
Annotations:  ....

Kubernetes version 1.22.x and 1.23.x (if all of your existing PVs have the GA label, use this
approach):

In-Tree vSphere Secret Configuration:
[Labels]
region = k8s-region
zone = k8s-zone

vSphere Container Storage Plug-in Secret Configuration:
[Labels]
region = k8s-region
zone = k8s-zone
[TopologyCategory "k8s-region"]
Label = "topology.kubernetes.io/region"
[TopologyCategory "k8s-zone"]
Label = "topology.kubernetes.io/zone"

Sample Labels on vSphere Container Storage Plug-in Nodes after Installation:
Name:         k8s-node-0179
Roles:        <none>
Labels:       topology.kubernetes.io/region=region-1
              topology.kubernetes.io/zone=zone-a
Annotations:  ....

Kubernetes version 1.24.x (if the Kubernetes cluster has PVs with either beta or GA labels, you
can migrate to vSphere Container Storage Plug-in using the following configuration):

In-Tree vSphere Secret Configuration:
[Labels]
region = k8s-region
zone = k8s-zone

vSphere Container Storage Plug-in Secret Configuration:
[Labels]
region = k8s-region
zone = k8s-zone
[TopologyCategory "k8s-region"]
Label = "topology.csi.vmware.com/region"
[TopologyCategory "k8s-zone"]
Label = "topology.csi.vmware.com/zone"

Sample Labels on vSphere Container Storage Plug-in Nodes after Installation:
Name:         k8s-node-0179
Roles:        <none>
Labels:       topology.kubernetes.io/region=region-1
              topology.kubernetes.io/zone=zone-a
Annotations:  ....
For information about deployments with topology, see Deploy vSphere Container Storage
Plug-in with Topology.


n vSphere Container Storage Plug-in version 3.0 provides a new migration-datastore-url
parameter in the vSphere configuration secret. The parameter allows vSphere Container
Storage Plug-in to honor the default datastore feature of the in-tree vSphere plug-in.
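
As a sketch, the parameter is set under a VirtualCenter section of the configuration secret;
the datastore URL below is a placeholder, and you should verify the placement against Create
a Kubernetes Secret for vSphere Container Storage Plug-in for your version:

[VirtualCenter "<VC-IP or VC-FQDN>"]
...
migration-datastore-url = "ds:///vmfs/volumes/<datastore-uuid>/"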

Enable Migration of In-Tree vSphere Volumes to vSphere Container Storage Plug-in
Migrate the in-tree vSphere volumes to vSphere Container Storage Plug-in.

Prerequisites

Make sure to use compatible versions of vSphere and Kubernetes. See Supported Kubernetes
Functionality.

Procedure

1 Install vSphere Cloud Provider Interface (CPI).

For more information, see Install vSphere Cloud Provider Interface.

2 Install vSphere Container Storage Plug-in with csi-migration set to true.

The following sample deployment YAML file uses version 2.4, but you can substitute it with a
later version of your choice:
https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/release-2.4/manifests/vanilla/vsphere-csi-driver.yaml.

apiVersion: v1
data:
  "csi-migration": "true"
  .........
kind: ConfigMap
metadata:
  name: internal-feature-states.csi.vsphere.vmware.com
  namespace: vmware-system-csi

3 Install the admission webhook.

vSphere Container Storage Plug-in does not support provisioning of a volume by specifying
migration specific parameters in the StorageClass. These parameters are added by the
vSphere Container Storage Plug-in translation library, and should not be used in the storage
class directly.

The Validating admission controller prevents you from creating or updating StorageClass
using csi.vsphere.vmware.com as provisioner with these parameters:

n csimigration

n datastore-migrationparam

n diskformat-migrationparam

n hostfailurestotolerate-migrationparam

n forceprovisioning-migrationparam


n cachereservation-migrationparam

n diskstripes-migrationparam

n objectspacereservation-migrationparam

n iopslimit-migrationparam

In addition, the Validating admission controller prevents you from creating or updating
StorageClass using kubernetes.io/vsphere-volume as provisioner with AllowVolumeExpansion
set to true.

As a prerequisite, you must install kubectl, openssl, and base64 commands on the system,
from which you invoke admission webhook installation scripts.

To deploy the admission webhook, download and execute the following file. If needed,
substitute the version number with a version of your choice:
https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/release-2.4/manifests/vanilla/deploy-vsphere-csi-validation-webhook.sh

./deploy-vsphere-csi-validation-webhook.sh
creating certs in tmpdir /var/folders/vy/_6dvxx7j5db9sq9n38qjymwr002gzv/T/tmp.ogj5ioIk
Generating a 2048 bit RSA private key
...........................................................................................
...............................................................+++
...........................................+++
writing new private key to '/var/folders/vy/_6dvxx7j5db9sq9n38qjymwr002gzv/T/tmp.ogj5ioIk/
ca.key'
-----
Generating RSA private key, 2048 bit long modulus
..............................................................+++
...........+++
e is 65537 (0x10001)
Signature ok
subject=/CN=vsphere-webhook-svc.vmware-system-csi.svc
Getting CA Private Key
secret "vsphere-webhook-certs" deleted
secret/vsphere-webhook-certs created
service/vsphere-webhook-svc created
validatingwebhookconfiguration.admissionregistration.k8s.io/
validation.csi.vsphere.vmware.com created
serviceaccount/vsphere-csi-webhook created
role.rbac.authorization.k8s.io/vsphere-csi-webhook-role created
rolebinding.rbac.authorization.k8s.io/vsphere-csi-webhook-role-binding created
deployment.apps/vsphere-csi-webhook created

4 On all control plane nodes, enable the CSIMigration and CSIMigrationvSphere parameters on
kube-controller-manager and kubelet.

The CSIMigrationvSphere flag enables shims and translation logic to route volume operations
from the vSphere in-tree volume plug-in to vSphere Container Storage Plug-in. It also
supports falling back to the in-tree vSphere plug-in if a node does not have vSphere Container
Storage Plug-in installed and configured.


CSIMigrationvSphere requires CSIMigration feature flag to be enabled. This flag enables the
vSphere Container Storage Plug-in migration on the Kubernetes cluster.
a Update kube-controller-manager manifest file and add the following arguments.

The file is available at /etc/kubernetes/manifests/kube-controller-manager.yaml.

`- --feature-gates=CSIMigration=true,CSIMigrationvSphere=true`

b Update kubelet configuration file and add the following flags.

The file is available at /var/lib/kubelet/config.yaml.

featureGates:
  CSIMigration: true
  CSIMigrationvSphere: true

c Restart the kubelet on the control plane nodes using the following command.

systemctl restart kubelet

d Verify that the kubelet is functioning correctly using the following command.

systemctl status kubelet

e For any issues with the kubelet, check the logs on the control plane node using the
following command.

journalctl -xe


5 Enable the CSIMigration and CSIMigrationvSphere feature flags on kubelet at all workload
nodes.

a Before you change the configuration on the kubelet on each workload node, drain the
nodes by removing running application workloads.

The following is a node drain example.

$ kubectl drain k8s-node1 --force --ignore-daemonsets


node/k8s-node1 cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job,
DaemonSet or StatefulSet: default/vcppod; ignoring DaemonSet-managed Pods: kube-
system/kube-flannel-ds-amd64-gs7fr, kube-system/kube-proxy-rbjx4, kube-system/vsphere-
csi-node-fh9f6
evicting pod default/vcppod
pod/vcppod evicted
node/k8s-node1 evicted

b To enable migration on the workload nodes, update the kubelet configuration file and add
the following flags. The file is available at /var/lib/kubelet/config.yaml.

featureGates:
  CSIMigration: true
  CSIMigrationvSphere: true

c Restart the kubelet on the workload nodes using the following command.

systemctl restart kubelet

d Verify that the kubelet is functioning correctly using the following command.

systemctl status kubelet

e For any issues with the kubelet, check the logs on the workload node using the following
command.

journalctl -xe


f After you enable the migration, ensure the csinodes instance for the node is updated with
the storage.alpha.kubernetes.io/migrated-plugins annotation.

$ kubectl describe csinodes k8s-node1


Name: k8s-node1
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins: kubernetes.io/
vsphere-volume
CreationTimestamp: Wed, 29 Apr 2020 17:51:35 -0700
Spec:
Drivers:
csi.vsphere.vmware.com:
Node ID: k8s-node1
Events: <none>

Note
n Do not uncordon the workload node until the migrated-plugins annotation on the CSINode
object for the workload node lists kubernetes.io/vsphere-volume.

n If a node is uncordoned before it is migrated to vSphere Container Storage Plug-in, a
volume is attached using the in-tree plugin. If there is a request for the volume to be
attached through vSphere Container Storage Plug-in, the request fails because the volume
is already attached to the workload VM using the in-tree plugin.

g Uncordon the node after the CSINode object for the node lists kubernetes.io/vsphere-volume
as migrated-plugins.

kubectl uncordon k8s-node1

h Repeat the above steps for all workload nodes in the Kubernetes cluster.

6 (Optional) You can enable the CSIMigrationvSphereComplete flag if you enabled the vSphere
Container Storage Plug-in migration on all nodes.

CSIMigrationvSphereComplete helps you to stop registering the vSphere in-tree plug-in
in kubelet and volume controllers. It also enables shims and translation logic to route
volume operations from the vSphere in-tree plug-in to vSphere Container Storage Plug-in.
The CSIMigrationvSphereComplete flag requires you to enable the CSIMigration and
CSIMigrationvSphere feature flags. Also, you must install and configure vSphere Container
Storage Plug-in on all nodes in the cluster.


7 Verify that the vSphere in-tree PVCs and PVs are migrated to vSphere Container Storage
Plug-in and that the pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com annotation is
present on the PVCs and PVs.

Annotations on PVCs:

Annotations: pv.kubernetes.io/bind-completed: yes


pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume

Annotations on PVs:

Annotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner


pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com
pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume

After you enable the migration, vSphere Container Storage Plug-in provisions any new in-tree
vSphere volume, which can be identified by the following annotations. The PV specification
continues to hold the vSphere volume path, so if you deactivate the migration, the
provisioned volume can still be used by the in-tree vSphere plug-in.

Annotations on PVCs:

Annotations: pv.kubernetes.io/bind-completed: yes


pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com

Annotations on PVs:

Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com

Deploying vSphere Container Storage Plug-in on Windows

This section provides information about deploying vSphere Container Storage Plug-in on
Windows.

Guidelines and Requirements

Your environment must meet general requirements that apply to vSphere Container Storage
Plug-in. For more information, see Preparing for Installation of vSphere Container Storage Plug-in.

In addition, follow these requirements to deploy vSphere Container Storage Plug-in with
Windows.

n Kubernetes version 1.20 or higher.

n vSphere Container Storage Plug-in version 3.0.0 or later.


n CSI Proxy v1. Install CSI Proxy on all Windows nodes. For more information, see https://github.com/kubernetes-csi/csi-proxy#installation

n Control plane nodes must run on Linux.

n Worker nodes must run Windows Server 2019. Other versions of Windows Server are not
supported. Windows worker nodes are supported on the following builds.

n Windows-ltsc2022

n Windows-20h2

n Windows-1809

n The containerd version must be 1.5 or higher if you use containerd on the nodes. For more
information, see containerd fails to add a disk mount on Windows.

Enable vSphere Container Storage Plug-in with Windows Nodes

Enable vSphere Container Storage Plug-in using Windows nodes on the Kubernetes cluster.

Note When a pod is created and the volume does not have a filesystem created on it, the
filesystem type supplied from the StorageClass is ignored and the volume gets formatted with
the NTFS file system.

Procedure

1 Install vSphere Container Storage Plug-in version 3.0.0.

For more information, see Install the vSphere Container Storage Plug-in.
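
Optionally, before checking the daemonset, confirm that your Windows workers carry the kubernetes.io/os=windows label that the daemonset in the next step selects on:

$ kubectl get nodes -l kubernetes.io/os=windows -o wide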

2 Verify that the Windows daemonset is available.

$ kubectl get daemonsets vsphere-csi-node-windows --namespace=vmware-system-csi


NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE
SELECTOR AGE
vsphere-csi-node-windows 1 1 1 1 1
kubernetes.io/os=windows 7m10s


3 Create a storage class.

a Define a StorageClass example-windows-sc.yaml.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-windows-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true # Optional: only applicable to vSphere 7.0U1 and above
parameters:
  storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
  #datastoreurl: "ds:///vmfs/volumes/vsan:52cdfa80721ff516-ea1e993113acfc77/" # Optional Parameter
  #csi.storage.k8s.io/fstype: "ntfs" # Optional Parameter

Note csi.storage.k8s.io/fstype is an optional parameter. You can set only ntfs as its value
because vSphere Container Storage Plug-in supports only the NTFS file system on Windows
nodes.

b Define a PersistentVolumeClaim request example-windows-pvc.yaml.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-windows-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-windows-sc
  volumeMode: Filesystem

Note The PVC definition for Windows is the same as for Linux. You can specify only
ReadWriteOnce as the access mode.

c Import the PersistentVolumeClaim into the Vanilla Kubernetes cluster:

kubectl create -f example-windows-pvc.yaml


d Verify that the PersistentVolume is created successfully.

The Status must display as Bound.

$ kubectl describe pvc example-windows-pvc


Name: example-windows-pvc
Namespace: default
StorageClass: example-windows-sc
Status: Bound
Volume: pvc-e64e0716-ff63-47b8-8ee4-d1eb4f86dcd7
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: example-windows-pod
Events: <none>

e Create a Windows pod specification example-windows-pod.yaml.

In the specification, reference the example-windows-pvc that you created earlier.

apiVersion: v1
kind: Pod
metadata:
  name: example-windows-pod
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: test-container
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command:
    - "powershell.exe"
    - "-Command"
    - "while (1) { Add-Content -Encoding Ascii C:\\test\\data.txt $(Get-Date -Format u); sleep 1 }"
    volumeMounts:
    - name: test-volume
      mountPath: "/test/"
      readOnly: false
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: example-windows-pvc


f Create the pod.

kubectl create -f example-windows-pod.yaml

g Verify that the pod is created successfully.

$ kubectl get pod example-windows-pod


NAME READY STATUS RESTARTS AGE
example-windows-pod 1/1 Running 0 4m13s

In the above example, example-windows-pvc is formatted with the NTFS file system and
mounted to the C:\test directory.
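
As an optional check, because the pod in this example continuously appends timestamps to C:\test\data.txt, you can read the file back to confirm that the volume is mounted and writable; the output will vary:

$ kubectl exec -it example-windows-pod -- powershell.exe -Command "Get-Content C:\test\data.txt -Tail 3"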

Using vSphere Container Storage Plug-in

vSphere Container Storage Plug-in for Kubernetes supports a number of different Kubernetes
features that include block volumes, file volumes, volume expansion, and so on.

Read the following topics next:

n Provisioning Block Volumes with vSphere Container Storage Plug-in

n Using Raw Block Volumes with vSphere Container Storage Plug-in

n Provisioning File Volumes with vSphere Container Storage Plug-in

n Topology-Aware Volume Provisioning

n Provisioning Volumes in an Environment with Multiple vCenter Server Instances

n Expanding a Volume with vSphere Container Storage Plug-in

n Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster

n Using vSphere Container Storage Plug-in for HCI Mesh Deployment

n Volume Snapshot and Restore

n Collecting Metrics with Prometheus to Monitor vSphere Container Storage Plug-in

n Collect vSphere Container Storage Plug-in Logs

Provisioning Block Volumes with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in supports dynamic provisioning and static provisioning of
block volumes in a native, also called vanilla, Kubernetes cluster that you deploy in a vSphere
environment.

Dynamically Provision a Block Volume with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in supports dynamic volume provisioning for block volumes.
With dynamic volume provisioning, you can create storage volumes on demand in native
Kubernetes clusters.


When you provision volumes dynamically, consider the following items:

n Block volumes can be provisioned using the ReadWriteOnce access mode in the
PersistentVolumeClaim specification. Volumes created using a PersistentVolumeClaim with
the ReadWriteOnce access mode should be used by a single pod. vSphere Container Storage
Plug-in does not support creating multiple pods that use the same PersistentVolumeClaim
with the ReadWriteOnce access mode.

n Dynamic volume provisioning allows you to create storage volumes on demand.

n Without dynamic volume provisioning, cluster administrators have to manually make calls
to their cloud or storage provider to create new storage volumes, and then create
PersistentVolume objects to represent them in Kubernetes.

n With dynamic volume provisioning, cluster administrators do not need to pre-provision
backing storage. Instead, the dynamic provisioning automatically provisions storage when
it is requested by users.

n The implementation of dynamic volume provisioning is based on the API object StorageClass
from the API group storage.k8s.io.

n A cluster administrator can define as many StorageClass objects as required. Each
StorageClass can specify a volume plug-in provisioner that provisions a volume, and a set of
parameters.

n A cluster administrator can define and expose multiple types of storage within a cluster by
using a custom set of parameters. Types of storage can be from the same or different storage
systems.

For more information on provisioning volumes using topology and use of the
WaitForFirstConsumer volumeBinding mode with dynamic volume provisioning, see Topology-
Aware Volume Provisioning.

Note Support for volume topology is present only in Vanilla Kubernetes for single-access (RWO)
file system based volume.

You can provision a PersistentVolume dynamically on a native Kubernetes cluster by using the
following steps.

Procedure

1 Define a Storage Class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
  # datastoreurl: "ds:///vmfs/volumes/vsan:52cdfa80721ff516-ea1e993113acfc77/" # Optional Parameter
  # csi.storage.k8s.io/fstype: "ext4" # Optional Parameter

2 Import this StorageClass into the native Kubernetes cluster.

kubectl create -f example-sc.yaml

3 Define a PersistentVolumeClaim request.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-sc

4 Import the PersistentVolumeClaim into the Vanilla Kubernetes cluster.

kubectl create -f example-pvc.yaml

5 Verify that the PersistentVolumeClaim has been created and has a PersistentVolume
attached to it.

$ kubectl describe pvc example-pvc


Name: example-pvc
Namespace: default
StorageClass: example-sc
Status: Bound
Volume: pvc-7ed39d8e-7896-11ea-a119-005056983fec
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
  Type    Reason                 Age                From                                                                                                  Message
  ----    ------                 ----               ----                                                                                                  -------
  Normal  Provisioning           50s                csi.vsphere.vmware.com_vsphere-csi-controller-7777666589-jpnqh_798e6967-2ce1-486f-9c21-43d9dea709ae  External provisioner is provisioning volume for claim "default/example-pvc"
  Normal  ExternalProvisioning   20s (x3 over 50s)  persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal  ProvisioningSucceeded  8s                 csi.vsphere.vmware.com_vsphere-csi-controller-7777666589-jpnqh_798e6967-2ce1-486f-9c21-43d9dea709ae  Successfully provisioned volume pvc-7ed39d8e-7896-11ea-a119-005056983fec

When successfully created, the Status section shows Bound and the Volume field is populated.

A PersistentVolume is automatically created, and is bound to this PersistentVolumeClaim.

In this example, RWO access mode indicates that the volume is created on a vSphere virtual
disk (First Class Disk).

6 Verify that PersistentVolume has been successfully created for the PersistentVolumeClaim.

$ kubectl describe pv pvc-7ed39d8e-7896-11ea-a119-005056983fec


Name: pvc-7ed39d8e-7896-11ea-a119-005056983fec
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: example-sc
Status: Bound
Claim: default/example-pvc
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
VolumeHandle: e4073a6d-642e-4dff-8f4a-b4e3a47c4bbd
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/
csiProvisionerIdentity=1586239648866-8081-csi.vsphere.vmware.com
type=vSphere CNS Block Volume
Events: <none>

If successfully created, the PersistentVolume is available in the output, and the VolumeHandle
key is populated.

The Status must display as Bound. You can also see that the Claim is set to the above
PersistentVolumeClaim name example-pvc.


Node VM Placement and Datastore Selection for Volume Provisioning

When a storage policy is not specified in the StorageClass, the vSphere Container Storage Plug-in
looks for a shared, accessible datastore that can be accessed by all nodes in the Kubernetes
cluster.

Note Information on this page applies only to vSphere Container Storage Plug-in versions 2.4.1
to 3.0.0. It does not apply to version 3.1.0 and later.

To determine which datastore can be accessed by all nodes, the vSphere Container Storage
Plug-in identifies the ESXi hosts where the nodes are placed. It then identifies the datastores
that are mounted on those ESXi hosts. The vSphere Container Storage Plug-in supplies this
information to the CreateVolume API, which selects the datastore with the highest capacity from
the supplied datastores for volume provisioning.

However, if the nodes are not distributed across all ESXi hosts in the vSphere cluster and are
instead placed on a subset of ESXi hosts, and if that subset of ESXi hosts has some shared
datastores, the volume might get provisioned on those datastores. Later, when you add a new
node to another ESXi host that does not have access to the shared datastore accessible to the
subset of ESXi hosts, the provisioned volume cannot be used on the newly added node.

This situation also applies to topology-aware setups. For example, when an availability zone has
only a single node, the volume might get provisioned on the ESXi host where the node is located.
Later, when you add a new node to the availability zone, the volume provisioned on the local
datastore cannot be used with the new node.

To avoid this situation, use a storage policy in the StorageClass to select a datastore, so that the
cluster can be scaled without any issues. For example, if you have several nodes in the vSphere
cluster and you want to use a datastore that is accessible to all ESXi hosts in the cluster, define
a storage policy that is compliant with that datastore. Then specify the storage policy in the
StorageClass when provisioning a volume. As a result, you can avoid provisioning the volume on
a datastore shared among a few ESXi hosts and a local datastore. And new nodes can be added
easily.
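
For example, a StorageClass along these lines keeps provisioning on the datastore targeted by such a policy; the policy name is an assumption for illustration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-shared-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "policy-for-datastore-shared-by-all-hosts" # assumed policy name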

Statically Provision a Block Volume with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in supports static provisioning for block volumes in native
Kubernetes clusters.

Static provisioning is a feature native to Kubernetes. With static provisioning, cluster
administrators can make existing storage devices available to a cluster. If you have an existing
persistent storage device in your vCenter Server, you can use static provisioning to make the
storage instance available to your cluster.

As a cluster administrator, you must know the details of the storage device, its supported
configurations, and mount options.


To make existing storage available to a cluster user, you must manually create the storage
device, a PersistentVolume, and a PersistentVolumeClaim. Because the PV and the storage
device already exist, you do not need to specify a storage class name in the PVC specification.
You can use different ways to create a static PV and PVC binding, for example, label matching
or volume size matching.

Note Creating multiple PVs for the same volume backed by a vSphere virtual disk (First Class
Disk) in the Kubernetes cluster is not supported.

The common use cases of static volume provisioning include the following.

Use an existing storage device

You have provisioned a persistent storage, First Class Disk (FCD), directly in your vCenter
Server, and want to use this FCD in your cluster.

Make retained data available to the cluster

You have provisioned a volume with the reclaimPolicy: Retain parameter in the storage class
by using dynamic provisioning. You have removed the PVC, but the PV, the physical storage
in vCenter Server, and the data still exist. You want to access the retained data from an
application in your cluster.

Share persistent storage across namespaces in the same cluster

You have provisioned a PV in a namespace of your cluster. You want to use the same storage
instance for an application pod that is deployed to a different namespace in your cluster.

Share persistent storage across clusters in the same zone

You have provisioned a PV for your cluster. To share the same persistent storage instance
with other clusters in the same zone, you must manually create the PV and matching PVC in
the other cluster.

Note Sharing persistent storage across clusters is available only if the cluster and the storage
instance are located in the same zone.

Statically provision a Single Access (RWO) Volume backed by vSphere Virtual Disk (First Class Disk)

This procedure provides instructions to provision a persistent volume statically on a
Vanilla Kubernetes cluster. Make sure to mention pv.kubernetes.io/provisioned-by:
csi.vsphere.vmware.com in the PV annotation.

Note Do not specify the key storage.kubernetes.io/csiProvisionerIdentity in
csi.volumeAttributes in the PV specification. This key indicates dynamically provisioned PVs.
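
To find the First Class Disk ID for the volumeHandle field, you can, for example, list the FCDs on a datastore with govc; this is a sketch that assumes govc is configured for your vCenter Server, and the datastore name is a placeholder:

govc disk.ls -ds example-datastore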


Procedure

1 Define a PVC and a PV.

Use the following YAML file as an example.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vanilla-rwo-filesystem-pv
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: example-vanilla-rwo-filesystem-sc
  claimRef:
    namespace: default
    name: static-vanilla-rwo-filesystem-pvc
  csi:
    driver: csi.vsphere.vmware.com
    fsType: ext4 # Change fstype to xfs or ntfs based on the requirement.
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: 0c75d40e-7576-4fe7-8aaa-a92946e2805d # First Class Disk (Improved Virtual Disk) ID
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: static-vanilla-rwo-filesystem-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: example-vanilla-rwo-filesystem-sc
  volumeName: static-vanilla-rwo-filesystem-pv
---

2 Import the PV and PVC into a native Kubernetes cluster.

kubectl create -f static.yaml

3 Verify that the PVC you imported has been created and the PersistentVolume is attached to
it.

$ kubectl describe pvc static-pvc-name


Name: static-pvc-name
Namespace: default


StorageClass:
Status: Bound
Volume: static-pv-name
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events: <none>

If the operation is successful, the Status section displays Bound and the Volume field is
populated.

4 Verify that the PV was successfully attached to the PVC.

$ kubectl describe pv static-pv-name


Name: static-pv-name
Labels: fcd-id=0c75d40e-7576-4fe7-8aaa-a92946e2805d
Annotations: pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: default/static-pvc-name
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
VolumeHandle: 0c75d40e-7576-4fe7-8aaa-a92946e2805d
ReadOnly: false
VolumeAttributes: type=vSphere CNS Block Volume
Events: <none>

If the operation is successful, the PV shows up in the output. You can also see that the
VolumeHandle key is populated. Status shows Bound. You can also see that Claim is set to
static-pvc-name.

Use XFS File System with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in supports the XFS file system for PVCs with ReadWriteOnce
access mode and Filesystem volume mode.

The following example explains how the XFS file system is used to mount a volume inside the
MongoDB application.


Prerequisites

Specify XFS as the csi.storage.k8s.io/fstype under the storage class to use XFS file system to
format and mount volume inside the container.

Note XFS file system support for vSphere Container Storage Plug-in is validated using Ubuntu
20.04.2 LTS with kernel version 5.4.0-66-generic. However, vSphere Container Storage Plug-in is
currently not compatible with the XFS file system on CentOS 7, and Red Hat Enterprise Linux 7
due to issues with xfsprogs 5.8.0-1.ph4 and respective kernels.

Procedure

1 Create a Storage Class.

a Use the following manifest as mongo-sc.yaml.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-mongo-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # Optional
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true # Optional: only applicable to vSphere 7.0U1 and above
parameters:
  storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter
  csi.storage.k8s.io/fstype: "xfs" # Optional Parameter

b Create a Storage Class.

kubectl create -f mongo-sc.yaml

2 Create PVC.

a Use the following manifest as mongo-pvc.yaml.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-mongo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: example-mongo-sc

b Create PVC.

kubectl create -f mongo-pvc.yaml


3 Create mongodb secrets.

a Use the following manifest as mongo-secrets.yaml.

apiVersion: v1
data:
  password: cGFzc3dvcmQxMjM= # password123
  username: YWRtaW51c2Vy # adminuser
kind: Secret
metadata:
  creationTimestamp: null
  name: mongo-creds

b Create the secret.

kubectl create -f mongo-secrets.yaml


4 Deploy mongodb.

a Use the following manifest as mongo-deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        args: ["--dbpath","/data/db"]
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-creds
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-creds
              key: password
        volumeMounts:
        - name: "mongo-data-dir"
          mountPath: "/data/db"
      volumes:
      - name: "mongo-data-dir"
        persistentVolumeClaim:
          claimName: "example-mongo-data"

b Create the deployment.

kubectl create -f mongo-deployment.yaml

c Verify that the file system used to mount the volume inside the mongodb pod is XFS by using the
following commands.

kubectl exec deployment/mongo -it -- /bin/bash


# mount | grep xfs
/dev/sdb on /data/db type xfs
(rw,relatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota)

5 Create mongodb node port service.

a Use the following manifest as mongo-nodeport-svc.yaml.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-nodeport-svc
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
    nodePort: 32000
  selector:
    app: mongo
  type: NodePort

b Create the service.

kubectl create -f mongo-nodeport-svc.yaml

c To connect to mongodb from outside the Kubernetes cluster, use the worker node IP address or
a load balancer address.

mongo --host <ip> --port <port of nodeport svc> -u adminuser -p password123


6 Access mongodb.

a Display the list of databases.

test> show dbs


admin 100.00 KiB
config 12.00 KiB
local 72.00 KiB

b Access a particular database.

test> use db1


switched to db db1

c Display the list of collections inside the db1 database.

db1> show collections

d Insert data into the db1 database.

db1> db.blogs.insertOne({name: "devopscube" })


{
acknowledged: true,
insertedIds: { '0': ObjectId("63da54d31b1b7f11e9cefb35") }
}

e Display data from the db1 database.

db1> db.blogs.find()
[ { _id: ObjectId("63da54d31b1b7f11e9cefb35"), name: 'devopscube' } ]

Using Raw Block Volumes with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in supports block-based volumes, also called raw block volumes.
You can use this functionality to expose a persistent volume inside a container as a block device
rather than as a mounted file system.

For information about raw block volume support in Kubernetes, see Raw Block Volume Support.

Certain applications require direct access to a block device. When using a raw block device
without a file system, Kubernetes can provide better support for high-performance applications
that consume and manipulate block storage for their needs. Applications such as MongoDB and
Cassandra that require consistent I/O performance and low latency can benefit from raw block
volumes and organize their data directly on the underlying storage.


Requirements
When you use raw block volumes with vSphere Container Storage Plug-in, follow these
guidelines and requirements.

n Use vSphere Container Storage Plug-in version 3 or later.

n Use only single-access ReadWriteOnce raw block volumes. vSphere Container Storage Plug-in
does not support raw block volumes that use the ReadWriteMany access mode.

Create a Raw Block PVC


Follow this procedure to create a new raw block PVC.

Procedure

1 Create a storage class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-raw-block-sc
provisioner: csi.vsphere.vmware.com

2 Create a raw block PersistentVolumeClaim.

You must specify volumeMode as Block and accessModes as ReadWriteOnce.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-raw-block-pvc
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-raw-block-sc

Use a Raw Block PVC


When you use the PVC in a pod definition, you must specify the device path for the block device
rather than the mount path for the file system.

Procedure

u Use the following example.

apiVersion: v1
kind: Pod
metadata:
  name: example-raw-block-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "while true ; do sleep 2 ; done"]
    volumeDevices:
    - devicePath: /dev/xvda
      name: data
  restartPolicy: Never
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-raw-block-pvc
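
After the pod is running, you can optionally confirm that the volume is exposed as a block device at the configured devicePath; a leading b in the mode bits indicates a block device, and the exact output will vary:

$ kubectl exec -it example-raw-block-pod -- ls -l /dev/xvda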

Provisioning File Volumes with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in version 2.0 and later for native Kubernetes clusters supports
file volumes backed by vSAN file shares. File volumes can be created statically or dynamically
and mounted by stateful containerized applications. When you use file volumes with vSphere
Container Storage Plug-in, you can reference the same shared data among multiple pods spread
across different clusters, which is mandatory for applications that require shareability.

Note File volumes do not work with functionalities such as volume expansion and encryption.

Requirements for File Volumes with vSphere Container Storage Plug-in

Before provisioning file volumes, make sure that your vSphere environment is appropriately
configured.

n Verify that your vSphere environment meets necessary requirements, see Compatibility
Matrices for vSphere Container Storage Plug-in, Supported Kubernetes Functionality, and
Configuration Maximums for vSphere Container Storage Plug-in.

n Enable and configure the file service in your vSAN cluster configuration to create file share
volumes. Configure the required file service domains, IP pools, and network. For more information,
see vSAN File Service.

n Establish a dedicated file share network connecting all the Kubernetes nodes. Ensure this
network is routable to the vSAN File Share network. For more information, see Configuring
Network Access to vSAN File Share.

n Configure the Kubernetes nodes to use the DNS server that was used to configure the file
services in the vSAN cluster. This helps the nodes to resolve the file share access points with a
Fully Qualified Domain Name (FQDN) while mounting the file volume to the pod; see the example
check after this list.


You can retrieve the vSAN file service DNS configuration by navigating to the Configure tab
of your vSAN cluster and clicking File Service.

n Configure the Kubernetes secret vsphere-config-secret to specify network permissions
and placement of volumes for your vSAN file shares in your vSphere environment. This
requirement is optional. For more information on the vSphere configuration file for file volumes,
see Create a Kubernetes Secret for vSphere Container Storage Plug-in. If these parameters
are not specified, vSphere Container Storage Plug-in determines how to place the file
volumes in any of your vSAN datastores with file services enabled. In such cases, the file
volumes backed by vSAN file shares using the NFSv4 protocol assume default network
permissions, such as Read-Write privilege and root access for all IP ranges.

n Required privilege: Host.Configuration.Storage partition configuration. For information, see
vSphere Roles and Privileges.
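
As a quick check of the DNS requirement above, you can resolve a file share access point FQDN from a Kubernetes node; the host name below is a placeholder for your environment:

nslookup fileshare-1.vsanfs.example.com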


Considerations when Using File Volumes with vSphere Container Storage Plug-in

When you use file volumes, consider the following:

n If you have file shares shared across more than one cluster in your vCenter Server, deleting
a PVC with the Delete reclaim policy in one cluster might delete the underlying file share. This
action might make the volume unavailable for the rest of the clusters.

n After you finish setting up file services, get started with vSphere CSI file services integration
with your applications. For more information, see a video on Cloud Native Storage and vSAN
File Services Integration. For Storage Class, PersistentVolumeClaim, PersistentVolume and
Pod specs, see Try-out YAMLs.

n vSphere Container Storage Plug-in can mount vSAN file service volumes only with NFS 4.1.
NFS 3 is not supported.

n When you request a RWX volume in Kubernetes, vSAN File Service creates an NFS-based file
share of the requested size and appropriate SPBM policy. One vSAN file share is created per
RWX volume. VMware supports 100 shares per vSAN File Service cluster, which means you
can have no more than 100 RWX volumes.

Dynamically Provision File Volumes with vSphere Container Storage Plug-in

vSphere Container Storage Plug-in supports dynamic provisioning of file volumes in native
Kubernetes clusters.

Prerequisites

n Set up your environment with vSAN File Service enabled. See Requirements for File Volumes
with vSphere Container Storage Plug-in.

n For additional information about integrating vSphere Container Storage Plug-in file services
with your applications, see Cloud Native Storage and vSAN File Services Integration.

n Use sample YAML files for Storage Class, PersistentVolumeClaim, PersistentVolume, and Pod
specs. For more information, see the following links.

n File Share Volume.

n Block Volume.

n Raw Block Volume.

Procedure

1 Download a sample storage class from the GitHub repository.

For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml.
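
A minimal file-volume StorageClass follows this pattern; this is a sketch, and the linked example remains the authoritative version:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-vanilla-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  csi.storage.k8s.io/fstype: "nfs4"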


2 Create a file volume PVC YAML file.

Set accessModes to either ReadWriteMany or ReadOnlyMany based on your requirement.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: example-vanilla-file-sc

3 Deploy the PVC.

kubectl apply -f example-pvc.yaml

Optionally, you can describe the corresponding PV after the PVC is bound.
The output is similar to the following:

Name: pvc-45cea491-8399-11ea-883a-005056b61591
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: example-vanilla-file-sc
Status: Bound
Claim: default/example-vanilla-file-pvc
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
VolumeHandle: file:53bf6fb7-fe9f-4bf8-9fd8-7a589bf77760
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1587430348006-8081-
csi.vsphere.vmware.com
type=vSphere CNS File Volume
Events: <none>

The VolumeHandle associated with the PV contains the file: prefix, which indicates that it is a
file volume.


4 Create a pod with Read-Write access or Read-Only access.

Option 1: Create a pod with Read-Write access, referencing the PVC from the example in step 3.

apiVersion: v1
kind: Pod
metadata:
  name: example-vanilla-file-pod1
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'Hello! This is Pod1' >> /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: example-vanilla-file-pvc

To read the same file share from multiple pods, specify the PVC associated with the file share
in the claimName field in all pod specifications.

Option 2: Create a pod with Read-Only access to the PVC. Specify readOnly as true in the
persistentVolumeClaim section. Setting just the accessModes to ReadOnlyMany in the PVC
specification is not sufficient to make the PVC Read-Only to the pods.

apiVersion: v1
kind: Pod
metadata:
  name: example-vanilla-file-pod2
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: example-vanilla-file-pvc
      readOnly: true

If you access this pod and try to create a file in the mountPath, which is /mnt/volume1, you
get an error.

$ kubectl exec -it example-vanilla-file-pod2 -c test-container -- /bin/sh
/ # cd /mnt/volume1/
/mnt/volume1 # touch abc.txt
touch: abc.txt: Read-only file system


Statically Provision File Volumes with vSphere Container Storage Plug-in

If you have an existing persistent storage file volume in your vCenter Server environment, use
static provisioning with vSphere Container Storage Plug-in to make the storage instance available
to your native Kubernetes cluster.

vSphere Container Storage Plug-in supports only vSAN file service volumes that are created with
a hard quota limit. File service volumes that do not have a hard quota limit are not supported.

Procedure

u Define a PVC and a PV.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-file-share-pv-name
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
  labels:
    "static-pv-label-key": "static-pv-label-value"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS File Volume"
    "volumeHandle": "file:236b3e6b-cfb0-4b73-a271-2591b2f31b4c"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: static-file-share-pvc-name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      static-pv-label-key: static-pv-label-value
  storageClassName: ""

n The labels key-value pair static-pv-label-key: static-pv-label-value used in the PV
metadata and the PVC selector aids in matching the PVC to the PV during static provisioning.


n Be sure to retain the file: prefix of the vSAN file share when filling in the volumeHandle
field in the PV specification.

Note For file volumes, CNS supports multiple PVs that refer to the same file-share volume.
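
After defining the PV and PVC, create them in the cluster; the file name below is an assumption:

kubectl create -f static-file-share.yaml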

Topology-Aware Volume Provisioning


You can design your vSphere environment in such a way that certain datastores become
accessible only from a subset of nodes in the vSphere cluster, based on availability zones. You
can segment the cluster into racks, regions, or zones, or use some other type of grouping. When
topology is enabled in the cluster, you can use vSphere Container Storage Plug-in to deploy a
Kubernetes workload to a specific region or zone defined in the topology.

In addition, you can use the volumeBindingMode parameter in the StorageClass to specify when
the volume should be created and bound to the PVC request. vSphere Container Storage Plug-in
supports two volume binding modes that Kubernetes provides.

Immediate

This is the default volume binding mode. The mode indicates that volume binding and
dynamic provisioning occur immediately after the PersistentVolumeClaim is created. To
deploy workloads with Immediate binding mode in topology-aware environment, you can
specify zone parameters in the StorageClass.

WaitForFirstConsumer

This mode delays the creation and binding of a persistent volume for a PVC until a pod that
uses the PVC is created. When you use this mode, you do not need to specify StorageClass
zone parameters because pod policies drive the decision of which zones to use for volume
provisioning.

Before deploying workloads with topology, enable topology in the native Kubernetes cluster in
your vSphere environment. For more information, see Deploy vSphere Container Storage Plug-in
with Topology.

Deploy Workloads with Immediate Mode in a Topology-Aware Environment
vSphere Container Storage Plug-in supports volume topology and availability zones in vSphere
environment. You can deploy a Kubernetes workload to a specific region or zone defined in the
topology using the default Immediate volume binding mode. The mode indicates that volume
binding and dynamic provisioning occur immediately after the PersistentVolumeClaim is created.

Deploy Workloads with Immediate Mode in a Topology-Aware Environment for File Volumes

For file volumes, the provisioned PersistentVolume does not have node affinity rules published
on it. This allows applications in a given availability zone to access file volumes in other
availability zones. Even if you choose a zone for the file volumes, applications across different
availability zones can still access these file volumes.

To deploy workloads with Immediate binding mode in a topology-aware environment, you must
specify zone parameters in the StorageClass.

Prerequisites

Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.

Procedure

1 Create a StorageClass with Immediate volume binding mode.

When you do not specify the volume binding mode, it is Immediate by default.
You can define network permissions in the VSPHERE_CSI_CONFIG secret to restrict volume
provisioning only in specific networks. To define network permissions, see Create a
Kubernetes Secret for vSphere Container Storage Plug-in.
You can also specify zone parameters. In the following example, the StorageClass can
provision volumes on either zone-a or zone-b.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-region
        values:
          - region-1
      - key: topology.csi.vmware.com/k8s-zone
        values:
          - zone-a
          - zone-b
parameters:
  csi.storage.k8s.io/fstype: "nfs4"

2 Create a PersistentVolumeClaim to use the StorageClass.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-multi-zones-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: example-multi-zones-sc


3 Verify whether the PVC is bound.

$ kubectl get pvc example-multi-zones-pvc


NAME STATUS VOLUME CAPACITY
ACCESS MODES STORAGECLASS AGE
example-multi-zones-pvc Bound pvc-012cc523-03f0-45ea-9213-883362436591 100Mi
RWX example-multi-zones-sc 3s

4 Verify that the PV that got provisioned does not have any node affinity rules on it.

Name: pvc-012cc523-03f0-45ea-9213-883362436591
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: example-multi-zones-sc
Status: Bound
Claim: default/example-multi-zones-pvc
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 100Mi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
FSType: nfs4
VolumeHandle: file:fd60964d-d956-42bf-8fe5-37534dc4861a
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1704350366618-902-
csi.vsphere.vmware.com
type=vSphere CNS File Volume
Events: <none>

5 Create an application to use the PVC.

apiVersion: v1
kind: Pod
metadata:
  name: example-multi-zones-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: example-multi-zones-pvc


Results

The pod may or may not get scheduled in the zone where the volume has been provisioned.

$ kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
example-multi-zones-pod 1/1 Running 0 3m53s 10.244.6.34 k8s-node-3
<none> <none>

$ kubectl get node k8s-node-3 --show-labels


NAME STATUS ROLES AGE VERSION LABELS
k8s-node-3 Ready <none> 2d v1.21.1 topology.csi.vmware.com/k8s-
region=region-1,topology.csi.vmware.com/k8s-zone=zone-c

Deploy Workloads with Immediate Mode in a Topology-Aware Environment for Block Volumes

To deploy workloads with Immediate binding mode in a topology-aware environment, you must
specify zone parameters in the StorageClass.

Prerequisites

Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.

Procedure

1 Create a StorageClass with Immediate volume binding mode.

When you do not specify the volume binding mode, it is Immediate by default.
You can also specify zone parameters. In the following example, the StorageClass can
provision volumes on either zone-a or zone-b.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-region
        values:
          - region-1
      - key: topology.csi.vmware.com/k8s-zone
        values:
          - zone-a
          - zone-b

2 Create a PersistentVolumeClaim to use the StorageClass.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-multi-zones-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: example-multi-zones-sc

3 Verify whether the PVC is bound.

$ kubectl get pvc example-multi-zones-pvc


NAME STATUS VOLUME CAPACITY
ACCESS MODES STORAGECLASS AGE
example-multi-zones-pvc Bound pvc-b489e551-9c76-44ca-9434-76c628836748 100Mi
RWO example-multi-zones-sc 3s

4 Verify that the PV node affinity rules include at least one domain within zone-a or zone-b
depending on whether the selected datastore is local or shared across zones.

root@k8s-control-108-1632518174:~# kubectl describe pv pvc-b489e551-9c76-44ca-9434-76c628836748
Name: pvc-b489e551-9c76-44ca-9434-76c628836748
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: example-multi-zones-sc
Status: Bound
Claim: default/example-multi-zones-pvc
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 100Mi
Node Affinity:
Required Terms:
Term 0: topology.csi.vmware.com/k8s-zone in [zone-b]
topology.csi.vmware.com/k8s-region in [region-1]
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
FSType: ext4
VolumeHandle: db13a347-0fd5-4b8a-894c-23cf84ab2973
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1634758806527-8081-
csi.vsphere.vmware.com
type=vSphere CNS Block Volume
Events: <none>

5 Create an application to use the PVC.

apiVersion: v1
kind: Pod
metadata:
  name: example-multi-zones-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: example-multi-zones-pvc

Notice that the pod is scheduled in the zone where the volume has been provisioned.
In this example, it is zone-b.

$ kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
example-multi-zones-pod 1/1 Running 0 3m53s 10.244.5.34 k8s-node-2
<none> <none>

$ kubectl get node k8s-node-2 --show-labels


NAME STATUS ROLES AGE VERSION LABELS
k8s-node-2 Ready <none> 2d v1.21.1 topology.csi.vmware.com/k8s-
region=region-1,topology.csi.vmware.com/k8s-zone=zone-b

Example: StorageClass with Additional Restrictive Parameters

You can use additional parameters, such as storagepolicyname, in the StorageClass to further
restrict the selection of a datastore for volume provisioning. For example, if you have a shared
datastore datastore-AB accessible to zone-a and zone-b, create a storage policy that points
to datastore-AB, and then mention this storage policy as a parameter in the storage class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "shared datastore zones A and B"
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-region
        values:
          - region-1
      - key: topology.csi.vmware.com/k8s-zone
        values:
          - zone-a
          - zone-b

Deploy Workloads with WaitForFirstConsumer Mode in a Topology-Aware Environment

vSphere Container Storage Plug-in supports topology-aware volume provisioning with
WaitForFirstConsumer. With WaitForFirstConsumer topology-aware provisioning, Kubernetes
can make intelligent decisions and find the best place to dynamically provision a volume for a
pod. In multi-zone clusters, you can provision volumes in an appropriate zone that can run your
pod. You can deploy and scale your stateful workloads across failure domains to provide high
availability and fault tolerance.

Deploy Workloads with WaitForFirstConsumer Mode in a Topology-Aware Environment for File Volumes

This section provides information about deploying workloads with WaitForFirstConsumer mode
in a topology-aware environment for file volumes.

Prerequisites

Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology

Procedure

1 Create a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: topology-aware-file-volume
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer

2 Create an application to use the StorageClass created previously.

Instead of creating a volume immediately, the WaitForFirstConsumer setting instructs the
volume provisioner to wait until a pod using the associated PVC runs through scheduling. In
contrast with the Immediate volume binding mode, when the WaitForFirstConsumer setting is
used, the Kubernetes scheduler drives the decision of which failure domain to use for volume
provisioning using the pod policies.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.csi.vmware.com/k8s-zone
                operator: In
                values:
                - zone-a
                - zone-b
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: topology.csi.vmware.com/k8s-zone
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
        - name: logs
          mountPath: /logs
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: topology-aware-file-volume
      resources:
        requests:
          storage: 2Gi
  - metadata:
      name: logs
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: topology-aware-file-volume
      resources:
        requests:
          storage: 1Gi

3 Verify that the StatefulSet is in the Running state and check that the pods are evenly
distributed across zone-a and zone-b.

$ kubectl get statefulset


NAME READY AGE
web 2/2 3m51s

$ kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 4m40s 10.244.3.21 k8s-node-2
<none> <none>
web-1 1/1 Running 0 4m12s 10.244.4.25 k8s-node-1
<none> <none>

$ kubectl get node k8s-node-1 k8s-node-2 --show-labels


NAME STATUS ROLES AGE VERSION LABELS
k8s-node-1 Ready <none> 2d v1.21.1 topology.csi.vmware.com/k8s-
region=region-1,topology.csi.vmware.com/k8s-zone=zone-a
k8s-node-2 Ready <none> 2d v1.21.1 topology.csi.vmware.com/k8s-
region=region-1,topology.csi.vmware.com/k8s-zone=zone-b

Note that the node affinity rules are not published on the PV.

Deploy Workloads with WaitForFirstConsumer Mode in a Topology-Aware Environment for Block Volumes

This section provides information about deploying workloads with WaitForFirstConsumer mode
in a topology-aware environment for block volumes.

Prerequisites

Enable topology in the native Kubernetes cluster in your vSphere environment. For more
information, see Deploy vSphere Container Storage Plug-in with Topology.

Procedure

1 Create a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: topology-aware-standard
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer


2 Create an application to use the StorageClass created previously.

Instead of creating a volume immediately, the WaitForFirstConsumer setting instructs the
volume provisioner to wait until a pod using the associated PVC runs through scheduling. In
contrast with the Immediate volume binding mode, when the WaitForFirstConsumer setting is
used, the Kubernetes scheduler drives the decision of which failure domain to use for volume
provisioning using the pod policies.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.csi.vmware.com/k8s-zone
                operator: In
                values:
                - zone-a
                - zone-b
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: topology.csi.vmware.com/k8s-zone
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
        - name: logs
          mountPath: /logs
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: topology-aware-standard
      resources:
        requests:
          storage: 2Gi
  - metadata:
      name: logs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: topology-aware-standard
      resources:
        requests:
          storage: 1Gi

3 Verify that the StatefulSet is in the Running state and check that the pods are evenly
distributed across zone-a and zone-b.

$ kubectl get statefulset


NAME READY AGE
web 2/2 3m51s

$ kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 4m40s 10.244.3.21 k8s-node-2
<none> <none>
web-1 1/1 Running 0 4m12s 10.244.4.25 k8s-node-1
<none> <none>

$ kubectl get node k8s-node-1 k8s-node-2 --show-labels


NAME STATUS ROLES AGE VERSION LABELS
k8s-node-1 Ready <none> 2d v1.21.1 topology.csi.vmware.com/k8s-
region=region-1,topology.csi.vmware.com/k8s-zone=zone-a
k8s-node-2 Ready <none> 2d v1.21.1 topology.csi.vmware.com/k8s-
region=region-1,topology.csi.vmware.com/k8s-zone=zone-b

Notice that the PV node affinity rules include at least one domain within zone-a or zone-b
depending on whether the selected datastore is local or shared across zones.

$ kubectl get pv -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.claimRef.name}{"\t"}{.spec.nodeAffinity}{"\n"}{end}'

pvc-2253dc52-a9ed-11e9-b26e-005056a04307  www-web-0   map[required:map[nodeSelectorTerms:[map[matchExpressions:[map[key:topology.csi.vmware.com/k8s-region operator:In values:[region-1]] map[operator:In values:[zone-b] key:topology.csi.vmware.com/k8s-zone]]]]]]
pvc-22575240-a9ed-11e9-b26e-005056a04307  logs-web-0  map[required:map[nodeSelectorTerms:[map[matchExpressions:[map[key:topology.csi.vmware.com/k8s-zone operator:In values:[zone-b]] map[key:topology.csi.vmware.com/k8s-region operator:In values:[region-1]]]]]]]
pvc-3c963150-a9ed-11e9-b26e-005056a04307  www-web-1   map[required:map[nodeSelectorTerms:[map[matchExpressions:[map[key:topology.csi.vmware.com/k8s-zone operator:In values:[zone-a]] map[operator:In values:[region-1] key:topology.csi.vmware.com/k8s-region]]]]]]
pvc-3c98978f-a9ed-11e9-b26e-005056a04307  logs-web-1  map[required:map[nodeSelectorTerms:[map[matchExpressions:[map[key:topology.csi.vmware.com/k8s-zone operator:In values:[zone-a]] map[key:topology.csi.vmware.com/k8s-region operator:In values:[region-1]]]]]]]

Deploy Workloads on a Preferential Datastore in a Topology-Aware Environment
vSphere Container Storage Plug-in supports volume provisioning on a preferential datastore.
Preferential datastores are only supported for block volumes.

You can use this functionality in an environment with a replicated datastore that is shared across
two topology domains, or sites. The datastore is active in one of the topology domains, and is
passive in the other.

When a site failure occurs, the active datastore on the failed site becomes passive, and the
passive datastore on the other site becomes active.

In the following diagram, the DS-1 datastore is active in Site 1 and passive in Site 2. The DS-2
datastore is active in Site 2 and passive in Site 1.

Both datastores, DS-1 and DS-2, are accessible to all nodes in both sites. A typical volume
provisioning request for Site 1 would provision a volume on either DS-1 or DS-2.

You can set preference to a particular datastore for a site, so that volume provisioning is limited
only to the active datastore.

[Figure: A stretched Kubernetes cluster and a stretched vCenter Server cluster span Site 1 (Datacenter-1) and Site 2 (Datacenter-2), with control plane and worker node VMs on ESXi servers at both sites, and PVs provisioned on both sites. Storage replication/mirroring keeps DS-1 active in Site 1 and passive in Site 2, and DS-2 passive in Site 1 and active in Site 2.]

In this example, the DS-1 datastore is set as a preferred datastore for Site 1 and DS-2 datastore is
a preferred datastore for Site 2.


Prerequisites

n Ensure that the vSphere Container Storage Plug-in version is 2.6.1 or later.

n Preferential datastores require a topology-aware Kubernetes deployment. See Deploying vSphere Container Storage Plug-in with Topology.

Procedure

1 As a vSphere administrator, create a category with the following name: cns.vmware.topology-preferred-datastores.

2 Create tags under this category that match the tags that you used for the topology domain
name. For example,

n For Site 1, if the topology domain name is site-1, create a site-1 tag under the
cns.vmware.topology-preferred-datastores category.

n For Site 2, if the topology domain name is site-2, create a site-2 tag under the
cns.vmware.topology-preferred-datastores category.

3 Assign the site-1 and site-2 tags created under the cns.vmware.topology-preferred-datastores category to the DS-1 and DS-2 datastores, respectively.

During a datastore failover scenario, workload node VMs created on a specific datastore become inaccessible, and CPI deletes the API server objects for these nodes from the Kubernetes cluster. To avoid this issue, install vSphere Cloud Provider v1.24.2 or later if you are using Kubernetes version 1.24.
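As an alternative to tagging in the vSphere Client, the category, tag, and assignment steps above can be scripted. The following is a sketch using the govc CLI; the datastore inventory paths are assumptions for this example.

# Create the category and the per-site tags.
govc tags.category.create cns.vmware.topology-preferred-datastores
govc tags.create -c cns.vmware.topology-preferred-datastores site-1
govc tags.create -c cns.vmware.topology-preferred-datastores site-2

# Attach each tag to its preferred datastore (inventory paths assumed).
govc tags.attach site-1 /Datacenter-1/datastore/DS-1
govc tags.attach site-2 /Datacenter-2/datastore/DS-2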

Results

After you set the preference on the datastores, any volume provisioning request for site-1
ensures that the volume is allocated on the DS-1 datastore.

What to do next

The CSI driver picks up any modifications to preferred datastore configurations every 5 minutes. If required, you can expedite these changes by restarting the vSphere CSI controller pods.

To customize this interval, you can adjust or define the csi-fetch-preferred-datastores-intervalinmin setting within the global section of the vSphere config secret.
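For example, a minimal sketch of the setting in the csi-vsphere.conf file that backs the vsphere-config-secret; the value shown is an illustrative assumption, and the default is 5.

[Global]
...
# Hypothetical override: poll for preferred datastore changes every minute
# instead of the default 5 minutes.
csi-fetch-preferred-datastores-intervalinmin = 1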

Provisioning Volumes in an Environment with Multiple vCenter Server Instances
vSphere Container Storage Plug-in supports volume provisioning in an environment with multiple
vCenter Server instances. As a result, you can provision volumes in a native Kubernetes cluster
that spans across multiple vCenter Server instances.


In the following diagram, the Kubernetes cluster spans across three vCenter Server instances
that represent different availability zones. Kubernetes control plane nodes and worker nodes are
distributed across these three zones. A volume provisioning request with zone1, specified in a
topology requirement, provisions the volume on vSAN1 or VMFS1 datastore connected to VC1.

[Figure: A Kubernetes cluster spans Zone1 (VC1, Datacenter1, Cluster1, ESXi1-ESXi3), Zone2 (VC2, Datacenter2, Cluster2, ESXi4-ESXi6), and Zone3 (VC3, Datacenter3, Cluster3, ESXi7-ESXi9). Each zone hosts one control plane node and two worker nodes, with vSAN, NFS, and VMFS datastores in every zone.]

Requirements and Limitations


Before deploying a workload in the configuration with multiple vCenter Server instances, deploy
vSphere Container Storage Plug-in with topology.

For more information, see Deploying vSphere Container Storage Plug-in with Multiple vCenter
Server Instances.

Note vSphere Container Storage Plug-in does not support datastores shared across multiple
vCenter Server instances.

Dynamic Volume Provisioning


To provision volumes and workloads dynamically in the environment with multiple vCenter Server
instances, follow the same steps required for provisioning topology-aware volumes. For more
information on topology-aware volume provisioning, see Topology-Aware Volume Provisioning.

The following additional guidelines apply.

n In the storage class, specify topology segments, such as regions or zones, for only a single
vCenter Server.


If topology segments in the storage class span across more than one vCenter Server,
provisioning of a corresponding volume fails.

The following is an example of a storage class used for dynamic provisioning of a block or a
file volume. Provisioning is done from one of the three vCenter Server instances shown in the
diagram. In this example, the storage class indicates zone-1, which means that only VC-1 can
initiate the provisioning.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-multi-zones-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-zone
        values:
          - zone-1

Static Volume Provisioning


Static Volume Provisioning for File Volumes

To provision file volumes statically in an environment with multiple vCenter Server instances,
follow the same steps required for provisioning static file volumes in a single vCenter Server.
See Statically Provision File Volumes with vSphere Container Storage Plug-in.

The CSI driver can identify the vCenter Server where the backing file share is located and
create a CNS volume for the same.

Static Volume Provisioning for Block Volumes

To provision block volumes statically in the environment with multiple vCenter Server
instances, follow the same steps required for provisioning topology-aware volumes. For
more information on topology-aware volume provisioning, see Topology-Aware Volume
Provisioning.

Specify affinity rules for a PersistentVolume object in the nodeAffinity section. The affinity rules indicate the topology segment and vCenter Server to which the volume belongs. If you do not specify the affinity rules in the PersistentVolume object, volume registration fails.


Use the following example to provision a PersistentVolume object on VC-2 in the environment represented by the diagram.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-name
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
  labels:
    static-pv-label-key: static-pv-label-value # This label is optional, it is used as a selector to bind with the volume claim. This can be any unique key-value to identify the PV.
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  claimRef:
    namespace: default
    name: static-pvc-name
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: "0c75d40e-7576-4fe7-8aaa-a92946e2805d" # First Class Disk (Improved Virtual Disk) ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.csi.vmware.com/k8s-zone
              operator: In
              values:
                - zone-2

Expanding a Volume with vSphere Container Storage Plug-in
vSphere Container Storage Plug-in supports volume expansion for block volumes that are
created dynamically or statically.

Kubernetes supports offline and online modes of volume expansion. When the PVC is used by
a pod and is mounted on a node, the volume expansion operation is categorized as online. In
all other cases, it is an offline expansion. For information about vSphere versions that support
volume expansion, see Supported Kubernetes Functionality.

Volume expansion is not supported when the volume has a snapshot, or when a Node VM snapshot exists with the volume attached to it.

The following considerations apply when you use volume expansion:

Feature Gate

The Expand CSI Volumes feature has been enabled by default since it was promoted to beta in Kubernetes 1.16. For Kubernetes releases before 1.16, enable the Expand CSI Volumes feature gate to support volume expansion in vSphere Container Storage Plug-in.
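For example, on a pre-1.16 cluster the gate can be turned on through the feature-gates flag of the Kubernetes components; a sketch, assuming the upstream gate identifier ExpandCSIVolumes:

# Pass to the relevant Kubernetes component command lines (for example,
# kube-apiserver and kubelet) on releases earlier than 1.16.
--feature-gates=ExpandCSIVolumes=true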

Sidecar Container

An external resizer sidecar container implements the logic of watching the Kubernetes API for
PVC edits, issuing the ControllerExpandVolume RPC call against a CSI endpoint, and updating
the PersistentVolume object to reflect the new size. This container is already deployed as
part of the vsphere-csi-controller pod.

Volume Resize Limitations

n Kubernetes does not support expansion of a volume backed by a StatefulSet.

n vSphere Container Storage Plug-in does not support expansion of a volume backed by vSAN file service.

Expand a Volume in Online Mode


vSphere Container Storage Plug-in supports volume expansion for block volumes in online mode.


Procedure

1 Create a PVC and a pod to use this PVC.

a Create a new StorageClass or edit the existing StorageClass to set allowVolumeExpansion to true.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true

b Create or edit a PVC that references the StorageClass.

Ensure that the PVC is in the Bound state. If you are using a statically provisioned PVC, ensure that the PVC and the PV specifications have the storageClassName parameter pointing to the StorageClass with allowVolumeExpansion set to true. A minimal PVC example appears after the output below.

c Deploy the PVC and create a pod to use this PVC.

$ kubectl get pvc,pv,pod
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
persistentvolumeclaim/example-block-pvc   Bound    pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd   1Gi        RWO            example-block-sc   8m5s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS       REASON   AGE
persistentvolume/pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd   1Gi        RWO            Delete           Bound    default/example-block-pvc   example-block-sc            7m59s

NAME                    READY   STATUS    RESTARTS   AGE
pod/example-block-pod   1/1     Running   0          7m1s
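The following is a minimal sketch of the PVC referenced in step 1b, using the example-block-sc StorageClass above and the 1Gi starting size that appears in the output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # starting size; expanded to 2Gi later in this procedure
  storageClassName: example-block-sc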


2 Expand the PVC in online mode.

a Patch the PVC to increase the storage size, in this example, to 2Gi.

$ kubectl patch pvc example-block-pvc -p '{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
persistentvolumeclaim/example-block-pvc patched

This action triggers an expansion in the volume associated with the PVC in vSphere Cloud
Native Storage.

b Describe the PVC.

The output looks similar to the following. The PVC shows the increase in size after the
volume underneath is expanded.

$ kubectl describe pvc example-block-pvc
Name:          example-block-pvc
Namespace:     default
StorageClass:  example-block-sc
Status:        Bound
Volume:        pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    example-block-pod
Events:
  Type     Reason                      Age  From                          Message
  ----     ------                      ---- ----                          -------
  Normal   ExternalProvisioning        19m  persistentvolume-controller   waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal   Provisioning                19m  csi.vsphere.vmware.com_vsphere-csi-controller-5d8c5c7d6-9r9kv_7adc4efc-10a6-4615-b90b-790032cc4569   External provisioner is provisioning volume for claim "default/example-block-pvc"
  Normal   ProvisioningSucceeded       19m  csi.vsphere.vmware.com_vsphere-csi-controller-5d8c5c7d6-9r9kv_7adc4efc-10a6-4615-b90b-790032cc4569   Successfully provisioned volume pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd
  Warning  ExternalExpanding           75s  volume_expand                 Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Normal   Resizing                    75s  external-resizer csi.vsphere.vmware.com   External resizer is resizing volume pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd
  Normal   FileSystemResizeRequired    69s  external-resizer csi.vsphere.vmware.com   Require file system resize of volume on node
  Normal   FileSystemResizeSuccessful  6s   kubelet, k8s-node-072         MountVolume.NodeExpandVolume succeeded for volume "pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd"

The PVC goes through events Resizing to FileSystemResizeRequired to FileSystemResizeSuccessful.

c Display the PV to verify the expanded size.

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS       REASON   AGE
pvc-84c89bf9-8455-4633-a8c8-cd623e155dbd   2Gi        RWO            Delete           Bound    default/example-block-pvc   example-block-sc            25m

Expand a Volume in Offline Mode


vSphere Container Storage Plug-in supports volume expansion for block volumes in offline mode.


Procedure

1 Create a PVC.

a Create a new StorageClass or edit the existing StorageClass to set allowVolumeExpansion to true.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true

b Create or edit a PVC that references the StorageClass.

Ensure that the PVC is in the Bound state. If you are using a statically provisioned PVC, ensure that the PVC and the PV specifications have the storageClassName parameter pointing to the StorageClass with allowVolumeExpansion set to true.

c Deploy the PVC.

$ kubectl get pvc,pv
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
persistentvolumeclaim/example-block-pvc   Bound    pvc-9e9a325d-ee1c-11e9-a223-005056ad1fc1   1Gi        RWO            example-block-sc   5m5s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS       REASON   AGE
persistentvolume/pvc-9e9a325d-ee1c-11e9-a223-005056ad1fc1   1Gi        RWO            Delete           Bound    default/example-block-pvc   example-block-sc            5m18s


2 Expand the PVC in offline mode.

a Patch the PVC to increase the storage size, in this example, to 2Gi.

$ kubectl patch pvc example-block-pvc -p '{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
persistentvolumeclaim/example-block-pvc patched

This action triggers an expansion in the volume associated with the PVC. The capacity
of the corresponding PV object changes. However, the capacity of the PVC does not
change until the PVC is used by a pod and mounted on a node.

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS       REASON   AGE
pvc-9e9a325d-ee1c-11e9-a223-005056ad1fc1   2Gi        RWO            Delete           Bound    default/example-block-pvc   example-block-sc            6m44s

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
example-block-pvc   Bound    pvc-9e9a325d-ee1c-11e9-a223-005056ad1fc1   1Gi        RWO            example-block-sc   6m57s

You can also see a FilesystemResizePending condition applied on the PVC when you
describe it.

b Create a pod to use the PVC.

apiVersion: v1
kind: Pod
metadata:
  name: example-block-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: example-block-pvc

$ kubectl create -f example-pod.yaml
pod/example-block-pod created


The Kubelet on the node triggers the filesystem expansion on the volume when the PVC
is attached to the pod.

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
example-block-pvc   Bound    pvc-24114458-9753-428e-9c90-9f568cb25788   2Gi        RWO            example-block-sc   2m12s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS       REASON   AGE
pvc-24114458-9753-428e-9c90-9f568cb25788   2Gi        RWO            Delete           Bound    default/example-block-pvc   example-block-sc            2m3s

$ kubectl get pod
NAME                READY   STATUS    RESTARTS   AGE
example-block-pod   1/1     Running   0          65s

The capacity of the PVC is modified and the FilesystemResizePending condition is removed from the PVC. Offline volume expansion is complete.

Deploy Kubernetes and Persistent Volumes on a vSAN Stretched Cluster
You can deploy a generic Kubernetes cluster and persistent volumes on vSAN stretched clusters.
You can deploy multiple Kubernetes clusters with different storage requirements in the same
vSAN stretched cluster.

vSAN stretched clusters support file volumes backed by vSAN file shares. For more information,
see Provisioning File Volumes with vSphere Container Storage Plug-in.

Prerequisites

When you plan to configure a Kubernetes cluster on a vSAN stretched cluster, consider the
following items:

n A generic Kubernetes cluster does not enforce the same storage policy on the node VMs and
on the persistent volumes. The vSphere administrator is responsible for the correct storage
policy configuration, assignment, and use of the storage policies within the Kubernetes
clusters.

n Use the VM storage policy with the same replication and site affinity settings for all storage
objects on the Kubernetes cluster. The same storage policy should be used for all node VMs,
including the control plane and worker, and all PVs.

n The topology feature cannot be used to provision a volume that belongs to a specific fault
domain within the vSAN stretched cluster.


Procedure

1 Set up your vSAN stretched cluster.

a Create a vSAN stretched cluster.

For more information, search for vSAN stretched cluster on the VMware vSAN
Documentation site.

b Turn on DRS on the stretched cluster.

c Turn on vSphere HA.

Make sure to set up Host Monitoring.

d Enable host monitoring and configure host failure response, response for host isolation,
and VM monitoring.

Note VMware recommends that you disable VM Component Protection (VMCP) when all node VMs and volumes are deployed on the vSAN datastore.

n Disable Datastore with PDL.

n Disable Datastore with APD.


2 Create a VM storage policy compliant with the vSAN stretched cluster requirements.

a Configure Site disaster tolerance.

Select Dual site mirroring to have data mirrored at both sites of the stretched cluster.

b Specify Failures to tolerate.

For the stretched cluster, the setting defines the number of disk or host failures a storage object can tolerate for each site. To tolerate n failures with mirroring, each site of the stretched cluster requires 2n + 1 fault domains, or hosts. For example, to tolerate one failure (n = 1), each site needs at least three hosts.

RAID-1 mirroring provides better performance. RAID-5 and RAID-6 achieve failure tolerance using parity blocks, which provides better space efficiency. These options are available only on all-flash clusters.


c Enable Force provisioning.


3 Create VM-Host affinity rules to place Kubernetes nodes on a specific primary or secondary site, such as Site-A.

For information about affinity rules, see Create a VM-Host Affinity Rule in the vSphere Resource Management documentation.

4 Deploy Kubernetes VMs using the vSAN stretched cluster storage policy.

5 Create a storage class using the vSAN stretched cluster storage policy, as shown in the sketch after this procedure.

6 Deploy persistent volumes using the vSAN stretched cluster storage class.
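The following is a minimal sketch of the storage class from step 5. The storagepolicyname parameter references the VM storage policy created in step 2; both names are assumptions for this example.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-stretched-sc                              # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vsan-stretched-cluster-policy"   # assumed name of the policy from step 2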

What to do next

Depending on your needs and environment, you can use one of the following deployment
scenarios when deploying your Kubernetes cluster and workloads on the vSAN stretched cluster.

Deployment 1
In this deployment, the control plane and worker nodes are placed on the primary site, but are flexible enough to fail over to another site if the primary site fails. You deploy HA Proxy on the primary site. This is also known as an Active-Passive deployment because only one site of the stretched vSAN cluster is used to deploy VMs.

If you plan to use file volumes (RWX volumes), it is recommended to configure the vSAN file
service domain to place file servers on the active site (preferred site). This reduces the cross-site
traffic latency and delivers better performance for applications using file volumes.


Requirements for Deployment 1

Node Placement:
n The control plane and worker nodes are on the primary site. They are flexible enough to fail over to another site if the primary site fails.
n HA Proxy on the primary site.

Failure to Tolerate: At least FTT1

DRS: Enabled

Site Disaster Tolerance: Dual Site Mirroring

Storage Policy: Force Provisioning Enabled

vSphere HA: Enabled

Potential Failover Scenarios for Deployment 1

The following scenarios describe potential failovers that might occur when you use deployment model 1.

Scenario: Several ESXi hosts fail on the primary site.
n Kubernetes node VMs move from unavailable hosts to the available hosts within the primary site.
n If the worker node needs to be restarted, pods running on that node can be rescheduled and recreated on another node.
n If the control plane node needs to be restarted, the existing application workload does not get affected.

Scenario: The entire primary site and all hosts on the site fail.
n Kubernetes node VMs move from the primary site to the secondary site.
n You experience a complete downtime until node VMs restart on the secondary site.

Scenario: Several hosts fail on the secondary site.
The failure does not affect the Kubernetes cluster because the entire cluster is at the primary site.

Scenario: The entire secondary site and all hosts on the site fail.
n The failure does not affect the Kubernetes cluster because the entire cluster is at the primary site.
n Replication for storage objects stops because the secondary site is not available.

Scenario: Intersite network failure occurs.
n The failure does not affect the Kubernetes cluster because the entire cluster is at the primary site.
n Replication for storage objects stops because the secondary site is not available.

Deployment 2
With this model, you place the control plane nodes on the primary site, and worker nodes can be spread across the primary and secondary sites. You deploy HA Proxy on the primary site.


Requirements for Deployment 2

Node Placement:
n The control plane nodes on the primary site.
n Worker nodes spread across the primary and secondary site.
n HA Proxy on the primary site.

Failure to Tolerate: At least FTT1

DRS: Enabled

Site Disaster Tolerance: Dual Site Mirroring

Storage Policy: Force Provisioning Enabled

vSphere HA: Enabled

Potential Failover Scenarios for Deployment 2

The following scenarios describe potential failovers that might occur when you deploy a Kubernetes cluster using the Deployment 2 model.

Scenario: Several ESXi hosts fail on the primary site.
n Kubernetes node VMs move from unavailable hosts to the available hosts within the same site. If resources are not available, they move to another site.
n If the worker node needs to be restarted, pods running on that node might be rescheduled and recreated on another node.
n If the control plane node needs to be restarted, the existing application workload does not get affected.

Scenario: The entire primary site and all hosts on the site fail.
n Kubernetes control plane node VMs and some worker nodes present on the primary site move from the primary site to the secondary site.
n Expect the control plane downtime until the control plane nodes restart on the secondary site.
n Expect partial downtime for pods running on the worker nodes on the primary site.
n Pods deployed on the worker nodes on the secondary site are not affected.

Scenario: Several hosts fail on the secondary site.
Node VMs and pods running on the node VMs restart on another host.

Scenario: The entire secondary site and all hosts on the site fail.
n Kubernetes control plane is unaffected.
n Kubernetes worker nodes move to the primary site.
n Pods deployed on the worker nodes on the secondary site are affected. They restart along with node VMs.

Scenario: Intersite network failure occurs.
n Kubernetes control plane is unaffected.
n Kubernetes worker nodes move to the primary site.
n Pods deployed on the worker nodes on the secondary site are affected. They restart along with node VMs.


Deployment 3
In this deployment model, you can place two control plane nodes on the primary site and one
control plane node on the secondary site. Deploy HA Proxy on the primary site. Worker nodes
can be on any site.

Requirements for Deployment 3


You can use this deployment model if you have equal resources at both the primary, or preferred, fault domain and the secondary, non-preferred, fault domain and you want to use hardware located at both fault domains. Because both fault domains run some workload, this deployment model helps with faster recovery in case of a complete site failure.

This model requires specific DRS policy rules: one rule to specify affinity between two control plane nodes and the primary site, and another rule for affinity between the third control plane node and the secondary site.

Node Placement:
n Two control plane nodes on the primary site.
n One control plane node on the secondary site.
n HA Proxy on the primary site.
n Worker nodes on any site.

Failure to Tolerate: At least FTT1

DRS: Enabled

Site Disaster Tolerance: Dual Site Mirroring

Storage Policy: Force Provisioning Enabled

vSphere HA: Enabled

Potential Failover Scenarios for Deployment 3

The following scenarios describe potential failovers that might occur when you use the Deployment 3 model.

Scenario: Several ESXi hosts fail on the primary site.
n Affected nodes get restarted on the available hosts on the primary site.
n If both control plane nodes are present on the failed host on the primary site, the control plane will be down until both control plane nodes recover on the available hosts on the primary site.
n While nodes are restarting on available hosts, pods might get rescheduled and recreated on available nodes.

Scenario: The entire primary site and all hosts on the site fail.
n Node VMs move from the primary site to the secondary site.
n Expect a downtime until node VMs restart on the secondary site.

Scenario: Several hosts fail on the secondary site.
n Node VMs and pods running on the node VMs restart on another host.
n If a control plane node on the secondary site is affected, the Kubernetes control plane remains unaffected. Kubernetes remains accessible through the two control plane nodes on the primary site.

Scenario: The entire secondary site and all hosts on the site fail.
n The control plane node and worker nodes from the secondary site are migrated to the primary site.
n Pods deployed on the worker nodes on the secondary site are affected. They restart along with the node VMs.

Scenario: Intersite network failure occurs.
n Kubernetes control plane is unaffected.
n Kubernetes nodes move to the primary site.
n Pods deployed on the worker nodes on the secondary site are affected. They restart along with the node VMs.

Upgrade Kubernetes and Persistent Volumes on vSAN Stretched Clusters
If you already have Kubernetes deployments on a vSAN datastore, you can upgrade your
deployments after enabling vSAN stretched clusters on the datastore.

Procedure

1 Edit the existing VM storage policy used for provisioning volumes and node VMs on the vSAN cluster to add stretched cluster parameters.

2 Apply the updated storage policy to all objects.

3 Apply the updated storage policy to the persistent volumes that have the Out of date status.

For information, see Monitor Container Volumes Across Kubernetes Clusters.

Using vSphere Container Storage Plug-in for HCI Mesh Deployment
vSphere Container Storage Plug-in version 3.1.0 and later supports VMware vSAN HCI Mesh
feature for block volumes on vSphere version 7.0 Update 3 and later.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. It brings multiple independent vSAN clusters together by enabling cross-cluster use of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently use and consume data center resources, which provides simple storage management at scale. You can create an HCI Mesh by mounting remote vSAN datastores on vSAN clusters and enabling data sharing from vCenter Server.

The following image displays the HCI Mesh Configuration.


[Figure: HCI Mesh configuration. A Kubernetes cluster spans three availability zones (AZ 1, AZ 2, AZ 3), each with a vSAN cluster and a local vSAN datastore running local apps, a modern app (for example, Kafka), and a traditional app (for example, Jenkins). A separate vSAN storage cluster provides a vSAN datastore that is remotely mounted to the availability zones through HCI Mesh.]

Note HCI Mesh does not support remote vSAN datastores on stretched clusters. For more
information on sharing remote datastores with HCI Mesh, see Sharing Remote Datastores with
HCI Mesh.

Considerations when Using HCI Mesh with vSphere Container Storage Plug-in
When you use HCI Mesh capabilities with vSphere Container Storage Plug-in, consider the
following:

n If you have an SPBM policy that is compatible with all the vSAN datastores in an HCI Mesh deployment, you can use it in StorageClasses in the Kubernetes cluster. However, vSphere Container Storage Plug-in will select either a local vSAN datastore or a remote vSAN datastore for volume placement. If there is a difference in data path performance between the two types of datastores, and if you want to offer two different SLAs for your applications, you can create two separate policies and storage classes.


n If you have two vSAN clusters and have mounted a remote vSAN datastore on one of the clusters, you can create two SPBM policies per cluster. One policy is assigned for the local vSAN datastore, and the other one is assigned for the remote vSAN datastore. Later, create two StorageClass objects in the Kubernetes cluster, one for each policy. This allows you to assign different SLAs for the two datastores. The following example shows a StorageClass for each SPBM policy.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: block-volume-local-vsan-cluster
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "local-vsan-cluster-policy"

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: block-volume-remote-vsan-cluster
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "remote-vsan-cluster-policy"

Limitations
The following limitations apply when you use vSphere Container Storage Plug-in with HCI Mesh
deployment.

1 vSphere Container Storage Plug-in does not support RWX and File Volumes on HCI Mesh
deployments.

2 All objects that comprise a VM must reside on the same datastore when you use HCI Mesh
deployment. For more information, see Sharing Remote Datastores with HCI Mesh.

3 vSphere Container Storage Plug-in version 3.1.0 does not support vSAN stretched clusters in combination with HCI Mesh. For more information, see Sharing Remote Datastores with HCI Mesh.

Volume Snapshot and Restore


vSphere Container Storage Plug-in supports the volume snapshot and restore capabilities. You
can use a snapshot to provision a new volume, pre-populated with the snapshot data. You can
also restore the existing volume to a previous state represented by the snapshot.

Volume Snapshot and Restore Requirements


Your environment must meet general requirements that apply to vSphere Container Storage
Plug-in. For more information, see Preparing for Installation of vSphere Container Storage Plug-in.


In addition, follow these requirements to use the volume snapshot and restore feature with
vSphere Container Storage Plug-in:

n CSI upstream external-snapshotter/snapshot-controller version 5.0.1 or higher.

n vSphere version 7.0 Update 3 or higher.

The minimum version applies to both vCenter Server and ESXi.

n Volume Snapshot CRD v1 is supported.

n Volume Snapshot CRD v1beta1 and v1alpha1 are not supported.

Note Volume Snapshots are supported only for block volumes.

Enable Volume Snapshot and Restore


Enable the volume snapshot and restore capabilities for vSphere Container Storage Plug-in.

Procedure

1 Install the version of vSphere Container Storage Plug-in that supports volume snapshots.

See Supported Kubernetes Functionality and Deploying the vSphere Container Storage Plug-in on a Native Kubernetes Cluster.

2 Deploy the required components using the following script available at:

https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/manifests/vanilla/deploy-csi-snapshot-components.sh

To obtain a detailed workflow of the script, run the bash deploy-csi-snapshot-components.sh -h command.

% ./deploy-csi-snapshot-components.sh
No existing snapshot-controller Deployment found, deploying it now..
Start snapshot-controller deployment...
customresourcedefinition.apiextensions.k8s.io/
volumesnapshotclasses.snapshot.storage.k8s.io created
Created CRD volumesnapshotclasses.snapshot.storage.k8s.io
customresourcedefinition.apiextensions.k8s.io/
volumesnapshotcontents.snapshot.storage.k8s.io created
Created CRD volumesnapshotcontents.snapshot.storage.k8s.io
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io
created
Created CRD volumesnapshots.snapshot.storage.k8s.io
✅ Deployed VolumeSnapshot CRDs
serviceaccount/snapshot-controller unchanged
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role unchanged
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection unchanged
✅ Created RBACs for snapshot-controller
deployment.apps/snapshot-controller created


deployment.apps/snapshot-controller image updated


deployment.apps/snapshot-controller patched
deployment.apps/snapshot-controller patched
Waiting for deployment spec update to be observed...
Waiting for deployment "snapshot-controller" rollout to finish: 0 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 out of 2 new replicas
have been updated...
Waiting for deployment "snapshot-controller" rollout to finish: 1 of 2 updated replicas
are available...
Waiting for deployment "snapshot-controller" rollout to finish: 1 of 2 updated replicas
are available...
deployment "snapshot-controller" successfully rolled out

✅ Successfully deployed snapshot-controller

No existing snapshot-validation-deployment Deployment found, deploying it now..


creating certs in tmpdir /var/folders/31/y77ywvzd6lqc0g60r4xnfyd80000gp/T/tmp.HmdOwrGg7f
Generating a 2048 bit RSA private key
...........................................................................................
..+++
.........................+++
writing new private key to '/var/folders/31/y77ywvzd6lqc0g60r4xnfyd80000gp/T/
tmp.HmdOwrGg7f/ca.key'
-----
Generating RSA private key, 2048 bit long modulus
...............................................................+++
.............................................................................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=snapshot-validation-service.kube-system.svc
Getting CA Private Key
secret "snapshot-webhook-certs" deleted
secret/snapshot-webhook-certs created
service "snapshot-validation-service" deleted
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2238 100 2238 0 0 9060 0 --:--:-- --:--:-- --:--:-- 9024
service/snapshot-validation-service created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation-
webhook.snapshot.storage.k8s.io configured
deployment.apps/snapshot-validation-deployment created
deployment.apps/snapshot-validation-deployment image updated


deployment.apps/snapshot-validation-deployment patched
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 0 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 0 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 2 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 2 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 2 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 2 out of 3 new
replicas have been updated...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 old replicas
are pending termination...
Waiting for deployment "snapshot-validation-deployment" rollout to finish: 1 old replicas
are pending termination...
deployment "snapshot-validation-deployment" successfully rolled out

✅ Successfully deployed snapshot-validation-deployment

csi-snapshotter side-car not found in vSphere CSI Driver Deployment, patching..


creating patch file in tmpdir /var/folders/31/y77ywvzd6lqc0g60r4xnfyd80000gp/T/
tmp.GiMDimfZdq
Scale down the vSphere CSI driver
deployment.apps/vsphere-csi-controller scaled
Patching vSphere CSI driver..
deployment.apps/vsphere-csi-controller patched
Scaling the vSphere CSI driver back to original state..
deployment.apps/vsphere-csi-controller scaled
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "vsphere-csi-controller" rollout to finish: 0 out of 3 new replicas
have been updated...
Waiting for deployment "vsphere-csi-controller" rollout to finish: 0 of 3 updated replicas
are available...
Waiting for deployment "vsphere-csi-controller" rollout to finish: 1 of 3 updated replicas
are available...
Waiting for deployment "vsphere-csi-controller" rollout to finish: 2 of 3 updated replicas

VMware by Broadcom 128


Getting Started with VMware vSphere Container Storage Plug-in

are available...
deployment "vsphere-csi-controller" successfully rolled out

✅ Successfully deployed all components for CSI Snapshot feature!

Note
n The snapshot-validation-deployment 5.0.1 validation webhook is also deployed as a part of the deployment script.

n A version check is performed only if snapshot-controller, snapshot-validation-deployment, csi-snapshotter, and the CRDs already exist.

n If the component version number is incorrect, the deployment fails with an error message.

n If there is a mismatch in the existing component version number, manually upgrade the component, or delete it. After deletion, the script deploys the appropriate version of the component.

Configure Maximum Number of Snapshots per Volume


Configure the maximum number of snapshots per volume for the vSphere Container Storage
Plug-in.

The configuration parameters are listed below. You must configure the parameters only when the
default constraint does not work for your use cases. Otherwise, you can skip the configuration
steps.

Parameter: global-max-snapshots-per-block-volume
Description: Global configuration parameter that applies to volumes on all kinds of datastores. By default, it is set to three.

Parameter: granular-max-snapshots-per-block-volume-vsan
Description: Granular configuration parameter for vSAN datastores only. It overrides the global constraint if set, and falls back to the global constraint if unset. A maximum value of 32 can be set under a vSAN ESA setup.

Parameter: granular-max-snapshots-per-block-volume-vvol
Description: Granular configuration parameter for Virtual Volumes datastores only. It overrides the global constraint if set, and falls back to the global constraint if unset.

Prerequisites

n For better performance, use two to three snapshots per virtual disk. For more information, see Best practices for using VMware snapshots in the vSphere environment.


n The maximum number of snapshots per volume is configurable; the default is set to three.

Note
n The best practice guideline applies only to virtual disks on VMFS and NFS datastores, not to those on Virtual Volumes and vSAN datastores.

n Granular configuration parameters are introduced apart from the global configuration parameter.

Procedure

1 Delete the secret that stores the vSphere configuration.

Kubernetes does not allow you to update secret resources in place.

kubectl delete secret vsphere-config-secret --namespace=vmware-system-csi

2 Update the config file of vSphere Container Storage Plug-in and add configuration
parameters for the snapshot feature under the [Snapshot] section.

$ cat /etc/kubernetes/csi-vsphere.conf
[Global]
...

[Snapshot]
global-max-snapshots-per-block-volume = 5        # optional, set to 3 if unset
granular-max-snapshots-per-block-volume-vsan = 7 # optional, falls back to the global constraint if unset
granular-max-snapshots-per-block-volume-vvol = 8 # optional, falls back to the global constraint if unset
...

3 Create a new secret with the updated config file.

kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=vmware-system-csi

Using Volume Snapshot and Restore


After you enable the volume snapshot and restore capabilities for vSphere Container Storage
Plug-in, you can create a snapshot dynamically or statically. You can also create a PVC from a
volume snapshot.

The following is an example of a StorageClass that you can use with volume snapshots. Optional parameters are commented.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-vanilla-rwo-filesystem-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # Optional
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true # Optional: only applicable to vSphere 7.0U1 and above
parameters:
  csi.storage.k8s.io/fstype: "ext4"
  # datastoreurl: "ds:///vmfs/volumes/vsan:52cdfa80721ff516-ea1e993113acfc77/" # Optional Parameter
  # storagepolicyname: "vSAN Default Storage Policy" # Optional Parameter

Volume Snapshot and Restore Limitations


Before you start using the volume snapshot and restore feature, consider the following
limitations:

n When you create a PVC from a VolumeSnapshot, it should reside on the same datastore as the original VolumeSnapshot. Otherwise, the provisioning of that PVC will fail with the following error:

failed to provision volume with StorageClass "vmfs12": rpc error: code = Internal desc = failed to create volume. Error: failed to get the compatible datastore for create volume from snapshot 0a3ce642-2c19-4d50-9534-7889b2a6db52+fc01aaa4-29d8-4a68-90ba-b1d53bb0657d with error: failed to find datastore with URL "ds:///vmfs/volumes/62fd07ba-4b18326e-137a-1c34da62fa18/" from the input datastore list, [Datastore:datastore-33 Datastore:datastore-34]

Note The datastore for the target PVC that you create from the VolumeSnapshot is
determined by the StorageClass in the PVC definition. Make sure that the StorageClass of
the target PVC and the StorageClass of the original source PVC point to the same datastore,
which is the datastore of the source PVC. This rule also applies to the topology requirements
in the StorageClass definitions. The requirements must also point to the same common
datastore. Any conflicting topology requirements result in the same error as shown above.

n You cannot delete or expand a volume that contains associated snapshots. Delete all
snapshots to expand or delete a volume.

n When you create a volume from a snapshot, ensure that the size of the volume matches the
size of the snapshot.

Create Dynamically Provisioned Snapshots


You can dynamically provision a snapshot for vSphere Container Storage Plug-in.

Procedure

1 Create a StorageClass.

$ kubectl apply -f example-sc.yaml

$ kubectl get sc
NAME                                          PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
example-vanilla-rwo-filesystem-sc (default)   csi.vsphere.vmware.com   Delete          Immediate           true                   2s


2 Create a PVC.

$ kubectl apply -f example-pvc.yaml

$ kubectl get pvc
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                        AGE
example-vanilla-rwo-pvc   Bound    pvc-2dc37ea0-dee0-4ad3-96ca-82f0159d7532   5Gi        RWO            example-vanilla-rwo-filesystem-sc   7s

3 Create a VolumeSnapshotClass.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-vanilla-rwo-filesystem-snapshotclass
driver: csi.vsphere.vmware.com
deletionPolicy: Delete

$ kubectl apply -f example-snapshotclass.yaml

$ kubectl get volumesnapshotclass
NAME                                           DRIVER                   DELETIONPOLICY   AGE
example-vanilla-rwo-filesystem-snapshotclass   csi.vsphere.vmware.com   Delete           4s

4 Create a VolumeSnapshot.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-vanilla-rwo-filesystem-snapshot
spec:
  volumeSnapshotClassName: example-vanilla-rwo-filesystem-snapshotclass
  source:
    persistentVolumeClaimName: example-vanilla-rwo-pvc

$ kubectl apply -f example-snapshot.yaml

$ kubectl get volumesnapshot
NAME                                      READYTOUSE   SOURCEPVC                 SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                                  SNAPSHOTCONTENT                                    CREATIONTIME   AGE
example-vanilla-rwo-filesystem-snapshot   true         example-vanilla-rwo-pvc                           5Gi           example-vanilla-rwo-filesystem-snapshotclass   snapcontent-a7c00b7f-f727-4010-9b1a-d546df9a8bab   57s            58s

Note For more information on the yaml files mentioned in the above steps, see https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/master/example/vanilla-k8s-RWO-filesystem-volumes.

Create Pre-Provisioned Snapshots


Pre-provision a snapshot for the vSphere Container Storage Plug-in.


Prerequisites

n Ensure that an FCD snapshot is available in your vSphere environment.

Note
n Pre-provisioned CSI snapshots are supported for CNS/FCD snapshots created using
Kubernetes VolumeSnapshot APIs for vSphere 7.0 Update 3 and later.

n Pre-provisioned CSI snapshots are not supported for FCD snapshots created using FCD
APIs directly.

n Construct the snapshot handle based on the combination of the FCD Volume ID and the FCD Snapshot ID of the snapshot. For example, if the FCD Volume ID and FCD Snapshot ID for an FCD snapshot are 4ef058e4-d941-447d-a427-438440b7d306 and 766f7158-b394-4cc1-891b-4667df0822fa, the constructed snapshot handle is 4ef058e4-d941-447d-a427-438440b7d306+766f7158-b394-4cc1-891b-4667df0822fa.

n Update the spec.source.snapshotHandle field in the VolumeSnapshotContent object of the example-static-snapshot.yaml with the snapshot handle constructed in the above example.
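The document does not reproduce example-static-snapshot.yaml here; the following is a minimal sketch of what it might contain, pairing a VolumeSnapshotContent that carries the constructed snapshot handle with a VolumeSnapshot that references it. The names match the output in the procedure below; the default namespace is an assumption.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: static-vanilla-rwo-filesystem-snapshotcontent
spec:
  deletionPolicy: Delete
  driver: csi.vsphere.vmware.com
  source:
    # FCD Volume ID + FCD Snapshot ID, as constructed in the prerequisite above.
    snapshotHandle: 4ef058e4-d941-447d-a427-438440b7d306+766f7158-b394-4cc1-891b-4667df0822fa
  volumeSnapshotRef:
    name: static-vanilla-rwo-filesystem-snapshot
    namespace: default        # assumption
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: static-vanilla-rwo-filesystem-snapshot
spec:
  source:
    volumeSnapshotContentName: static-vanilla-rwo-filesystem-snapshotcontent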

Procedure

u Create a pre-provisioned volume snapshot.

$ kubectl apply -f example-static-snapshot.yaml

$ kubectl get volumesnapshot static-vanilla-rwo-filesystem-snapshot
NAME                                     READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT                           RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                 CREATIONTIME   AGE
static-vanilla-rwo-filesystem-snapshot   true                     static-vanilla-rwo-filesystem-snapshotcontent   5Gi                           static-vanilla-rwo-filesystem-snapshotcontent   76m            22m

Restore Volume Snapshots


You can restore a volume snapshot that is already created with vSphere Container Storage
Plug-in.

Procedure

1 Ensure that the volume snapshot that you want to restore is available in the current
Kubernetes cluster.

$ kubectl get volumesnapshot
NAME                                      READYTOUSE   SOURCEPVC                 SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                                  SNAPSHOTCONTENT                                    CREATIONTIME   AGE
example-vanilla-rwo-filesystem-snapshot   true         example-vanilla-rwo-pvc                           5Gi           example-vanilla-rwo-filesystem-snapshotclass   snapcontent-a7c00b7f-f727-4010-9b1a-d546df9a8bab   22m            22m


2 Create a PVC from a volume snapshot.

$ kubectl create -f example-restore.yaml

$ kubectl get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                        AGE
example-vanilla-rwo-filesystem-restore   Bound    pvc-202c1dfc-78be-4835-89d5-110f739a87dd   5Gi        RWO            example-vanilla-rwo-filesystem-sc   78s
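A minimal sketch of what example-restore.yaml might contain: a PVC whose dataSource points at the snapshot created earlier. The names, size, and StorageClass match the output above; the access mode is an assumption consistent with the RWO examples in this section.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-rwo-filesystem-restore
spec:
  storageClassName: example-vanilla-rwo-filesystem-sc
  dataSource:
    # The VolumeSnapshot to restore from; must be in the same namespace as the PVC.
    name: example-vanilla-rwo-filesystem-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce            # assumption
  resources:
    requests:
      storage: 5Gi             # must match the snapshot size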

Collecting Metrics with Prometheus to Monitor vSphere Container Storage Plug-in
You can use Prometheus to collect vSphere Container Storage Plug-in metrics. You can then
visualize these metrics with Grafana dashboards to monitor health and stability of vSphere
Container Storage Plug-in.

What Is Prometheus and Grafana?


Prometheus is open-source monitoring software that collects, organizes, and stores metrics along with unique identifiers and timestamps. vSphere Container Storage Plug-in exposes its metrics so that Prometheus can collect them.

Using the information captured in Prometheus, you can build Grafana dashboards that help you analyze and understand the health and behavior of vSphere Container Storage Plug-in.

For more information, see the Prometheus documentation at https://prometheus.io/docs/introduction/overview/.

Exposing Prometheus Metrics


Prometheus collects metrics from targets by scraping metrics HTTP endpoints.

In the controller pod of vSphere Container Storage Plug-in, the following two containers expose
metrics:

n The vsphere-csi-controller container exposes Prometheus metrics from port 2112.

The container provides communication from the Kubernetes Cluster API server to the CNS
component on vCenter Server for volume lifecycle operations.

n The vsphere-syncer container exposes Prometheus metrics from port 2113.

The container sends metadata information about persistent volumes to the CNS component
on vCenter Server, so that it can be displayed in the vSphere Client in the Container Volumes
view.


View Prometheus Metrics


You can view Prometheus metrics exposed by the vsphere-csi-controller service of vSphere Container Storage Plug-in.

1 Get the Cluster IP of the vsphere-csi-controller service.

# kubectl get service -n vmware-system-csi


NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vsphere-csi-controller ClusterIP 10.100.XXX.XX <none> 2112/TCP,2113/TCP 23h

2 View Prometheus metrics exposed by the vsphere-csi-controller service.

To get metrics exposed at a specific port, use the appropriate command.

Action Command

Get metrics exposed at port 2112 # curl 10.100.XXX.XX:2112/metrics

Get metrics exposed at port 2113 # curl 10.100.XXX.XX:2113/metrics
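If you run the Prometheus Operator (deployed later in this section through kube-prometheus), a ServiceMonitor can scrape both ports. The following is a minimal sketch; the label selector is an assumption and must match the labels on the vsphere-csi-controller service in your cluster.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vsphere-csi-controller
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
      - vmware-system-csi
  selector:
    matchLabels:
      app: vsphere-csi-controller   # assumption: adjust to the labels on your service
  endpoints:
    - targetPort: 2112              # vsphere-csi-controller metrics
    - targetPort: 2113              # vsphere-syncer metrics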

Prometheus Metrics Exposed by vSphere Container Storage Plug-in

vsphere_csi_info (Gauge)
Metric that indicates the vsphere-csi-controller container version.
Example: vsphere_csi_info{version="16b7a33"} 1

vsphere_syncer_info (Gauge)
Metric that indicates the vsphere-syncer container version.
Example: vsphere_syncer_info{version="16b7a33"} 1

vsphere_cns_volume_ops_histogram (Vector of histogram)
Histogram vector metrics to observe various control operations on CNS. The optype field indicates the type of the CNS volume operation. The value of optype can be the following:
n create-volume
n delete-volume
n attach-volume
n detach-volume
n update-volume-metadata
n expand-volume
n query-volume
n query-all-volume
n query-volume-info
n relocate-volume
n configure-volume-acl
n query-snapshots
n create-snapshot
n delete-snapshot
The value of the status field can be pass or fail.
Example:
vsphere_cns_volume_ops_histogram_bucket{optype="attach-volume",status="pass",le="1"} 1
vsphere_cns_volume_ops_histogram_sum{optype="attach-volume",status="pass"} 6.611152518
vsphere_cns_volume_ops_histogram_count{optype="attach-volume",status="pass"} 3


Name Type Description Example

vsphere_csi_volume_ops_hi Vector of Histogram vector metrics to observe vsphere_csi_volume_ops_hi


stogram histogram various control operations in vSphere stogram_bucket{optype="cr
Container Storage Plug-in. eate-
The optype field indicates the type of volume",status="pass",vol
the volume operation performed by type="block",le="7"} 3
vSphere Container Storage Plug-in. vsphere_csi_volume_ops_hi
The value of optype can be the stogram_sum{optype="creat
following: e-
n create-volume volume",status="pass",vol

n delete-volume type="block"} 9.983518201

n attach-volume vsphere_csi_volume_ops_hi
stogram_count{optype="cre
n detach-volume
ate-
n expand-volume
volume",status="pass",vol
n create-snapshot
type="block"} 3
n delete-snapshot
n list-snapshot
The value of the status field can be
pass or fail.

vsphere_full_sync_ops_histo Vector of Histogram vector metric to observe vsphere_full_sync_ops_his


gram histogram the full synchronization operation of togram_bucket{status="pas
vSphere Container Storage Plug-in. s",le="7"} 73
The value of the status field can be vsphere_full_sync_ops_his
pass or fail. togram_sum{status="pass"}
7.559699346999998

vsphere_full_sync_ops_his
togram_count{status="pass
"} 73

Deploy Prometheus and Build Grafana Dashboards


Follow this sample workflow to deploy a Prometheus monitoring stack and build Grafana dashboards.

Deploy Prometheus Monitoring Stack


Deploy a Prometheus monitoring stack that includes AlertManager and Grafana.

Procedure

1 Clone the kube-prometheus repository from GitHub.

% git clone https://github.com/prometheus-operator/kube-prometheus


Cloning into 'kube-prometheus'...
remote: Enumerating objects: 15523, done.
remote: Counting objects: 100% (209/209), done.
remote: Compressing objects: 100% (119/119), done.
remote: Total 15523 (delta 126), reused 123 (delta 78), pack-reused 15314
Receiving objects: 100% (15523/15523), 7.79 MiB | 542.00 KiB/s, done.
Resolving deltas: 100% (9884/9884), done.

% cd kube-prometheus


% ls
CHANGELOG.md experimental
CONTRIBUTING.md go.mod
DCO go.sum
LICENSE jsonnet
Makefile jsonnetfile.json
README.md jsonnetfile.lock.json
RELEASE.md kubescape-exceptions.json
build.sh kustomization.yaml
code-of-conduct.md manifests
developer-workspace scripts
docs sync-to-internal-registry.jsonnet
example.jsonnet tests
examples
%


2 Apply the kube-prometheus manifests.

a Create the CRDs used by the Prometheus stack.

% kubectl apply --server-side -f manifests/setup


customresourcedefinition.apiextensions.k8s.io/
alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com
serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com
serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com
serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com
serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com
serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com
serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com
serverside-applied
namespace/monitoring serverside-applied

% kubectl get crd


NAME CREATED AT
alertmanagerconfigs.monitoring.coreos.com 2022-03-09T09:16:24Z
alertmanagers.monitoring.coreos.com 2022-03-09T09:16:25Z
backups.velero.io 2022-02-18T11:12:17Z
backupstoragelocations.velero.io 2022-02-18T11:12:17Z
cnsvolumeoperationrequests.cns.vmware.com 2022-02-10T11:38:35Z
csinodetopologies.cns.vmware.com 2022-02-10T11:38:55Z
deletebackuprequests.velero.io 2022-02-18T11:12:17Z
downloadrequests.velero.io 2022-02-18T11:12:17Z
podmonitors.monitoring.coreos.com 2022-03-09T09:16:25Z
podvolumebackups.velero.io 2022-02-18T11:12:17Z
podvolumerestores.velero.io 2022-02-18T11:12:17Z
probes.monitoring.coreos.com 2022-03-09T09:16:25Z
prometheuses.monitoring.coreos.com 2022-03-09T09:16:26Z
prometheusrules.monitoring.coreos.com 2022-03-09T09:16:27Z
resticrepositories.velero.io 2022-02-18T11:12:17Z
restores.velero.io 2022-02-18T11:12:17Z
schedules.velero.io 2022-02-18T11:12:18Z
serverstatusrequests.velero.io 2022-02-18T11:12:18Z
servicemonitors.monitoring.coreos.com 2022-03-09T09:16:27Z
thanosrulers.monitoring.coreos.com 2022-03-09T09:16:27Z
volumesnapshotclasses.snapshot.storage.k8s.io 2022-02-10T11:48:15Z
volumesnapshotcontents.snapshot.storage.k8s.io 2022-02-10T11:48:16Z
volumesnapshotlocations.velero.io 2022-02-18T11:12:18Z
volumesnapshots.snapshot.storage.k8s.io 2022-02-10T11:48:17Z

b Deploy and verify the Prometheus stack objects.

% kubectl apply -f manifests/


alertmanager.monitoring.coreos.com/main created


poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created


servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator
created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created

% kubectl get servicemonitors -A


NAMESPACE NAME AGE
monitoring alertmanager-main 46s
monitoring blackbox-exporter 45s
monitoring coredns 25s

monitoring grafana 27s
monitoring kube-apiserver 25s
monitoring kube-controller-manager 24s
monitoring kube-scheduler 24s
monitoring kube-state-metrics 26s
monitoring kubelet 24s
monitoring node-exporter 23s
monitoring prometheus-adapter 17s
monitoring prometheus-k8s 20s
monitoring prometheus-operator 16s

% kubectl get pod -n monitoring


NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 95s
alertmanager-main-1 2/2 Running 0 95s
alertmanager-main-2 2/2 Running 0 95s
blackbox-exporter-7d89b9b799-svr4t 3/3 Running 0 2m5s
grafana-5577bc8799-b5bnd 1/1 Running 0 107s
kube-state-metrics-d5754d6dc-spx4w 3/3 Running 0 106s
node-exporter-8b44z 2/2 Running 0 103s
node-exporter-jrxrc 2/2 Running 0 103s
node-exporter-pj7nb 2/2 Running 0 103s
prometheus-adapter-6998fcc6b5-dlqk6 1/1 Running 0 97s
prometheus-adapter-6998fcc6b5-qswk4 1/1 Running 0 97s
prometheus-k8s-0 2/2 Running 0 94s
prometheus-k8s-1 2/2 Running 0 94s
prometheus-operator-59647c66cf-ldppj 2/2 Running 0 96s

% kubectl get svc -n monitoring


NAME TYPE CLUSTER-IP EXTERNAL-IP
PORT(S) AGE
alertmanager-main ClusterIP 10.96.161.166 <none> 9093/
TCP,8080/TCP 7m18s
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/
TCP,9094/UDP 6m47s
blackbox-exporter ClusterIP 10.104.28.233 <none> 9115/
TCP,19115/TCP 7m17s
grafana ClusterIP 10.97.77.202 <none>
3000/TCP 7m
kube-state-metrics ClusterIP None <none> 8443/
TCP,9443/TCP 6m58s
node-exporter ClusterIP None <none>
9100/TCP 6m55s
prometheus-adapter ClusterIP 10.104.10.57 <none>
443/TCP 6m50s
prometheus-k8s ClusterIP 10.99.185.136 <none> 9090/
TCP,8080/TCP 6m53s
prometheus-operated ClusterIP None <none>
9090/TCP 6m46s
prometheus-operator ClusterIP None <none> 8443/TCP


3 Adjust the ClusterRole prometheus-k8s.

When deployed through kube-prometheus, the prometheus-k8s ClusterRole does not have the apiGroups, resources, and verbs rules that are necessary to scrape metrics from vSphere Container Storage Plug-in. You must modify the ClusterRole with these rules.
a Display the ClusterRole after it was first created.

% kubectl get ClusterRole prometheus-k8s -o yaml


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"prometheus","app.kubernetes.io/instance":"k8s","app.kubernetes.io/name":"prometheus","app.kubernetes.io/part-of":"kube-prometheus","app.kubernetes.io/version":"2.33.4"},"name":"prometheus-k8s"},"rules":[{"apiGroups":[""],"resources":["nodes/metrics"],"verbs":["get"]},{"nonResourceURLs":["/metrics"],"verbs":["get"]}]}
  creationTimestamp: "2022-03-09T09:19:39Z"
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.33.4
  name: prometheus-k8s
  resourceVersion: "7283142"
  uid: e18f021c-3e6e-4162-98ca-bbf912b75b06
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get

b Update the apiGroup resources and verbs rules of the manifest.

% cat prometheus-clusterRole-updated.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.33.0
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs:
  - /metrics
  verbs:
  - get

% kubectl apply -f prometheus-clusterRole-updated.yaml
clusterrole.rbac.authorization.k8s.io/prometheus-k8s configured

c Display the updated ClusterRole.

% kubectl get ClusterRole prometheus-k8s -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"prometheus","app.kubernetes.io/instance":"k8s","app.kubernetes.io/name":"prometheus","app.kubernetes.io/part-of":"kube-prometheus","app.kubernetes.io/version":"2.33.0"},"name":"prometheus-k8s"},"rules":[{"apiGroups":[""],"resources":["nodes","services","endpoints","pods"],"verbs":["get","list","watch"]},{"nonResourceURLs":["/metrics"],"verbs":["get"]}]}
  creationTimestamp: "2022-03-09T09:19:39Z"
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.33.0
  name: prometheus-k8s
  resourceVersion: "7284231"
  uid: e18f021c-3e6e-4162-98ca-bbf912b75b06
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get


4 Create a ServiceMonitor.

To monitor a service, such as vSphere Container Storage Plug-in, through Prometheus, you must create a ServiceMonitor object.
a Create the manifest and deploy the ServiceMonitor object.

The object will be used to monitor the vsphere-csi-controller service. The endpoints
refer to ports 2112 (ctlr) and 2113 (syncer).

% cat vsphere-csi-controller-service-monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vsphere-csi-controller-prometheus-servicemonitor
  namespace: monitoring
  labels:
    name: vsphere-csi-controller-prometheus-servicemonitor
spec:
  selector:
    matchLabels:
      app: vsphere-csi-controller
  namespaceSelector:
    matchNames:
    - vmware-system-csi
  endpoints:
  - port: ctlr
  - port: syncer

% kubectl apply -f vsphere-csi-controller-service-monitor.yaml


servicemonitor.monitoring.coreos.com/vsphere-csi-controller-prometheus-servicemonitor
created

b Verify that ServiceMonitors are running.

% kubectl get servicemonitors -A


NAMESPACE NAME AGE
monitoring alertmanager-main 9m32s
monitoring blackbox-exporter 9m31s
monitoring coredns 9m11s
monitoring grafana 9m13s
monitoring kube-apiserver 9m11s
monitoring kube-controller-manager 9m10s
monitoring kube-scheduler 9m10s
monitoring kube-state-metrics 9m12s
monitoring kubelet 9m10s
monitoring node-exporter 9m9s
monitoring prometheus-adapter 9m3s
monitoring prometheus-k8s 9m6s
monitoring prometheus-operator 9m2s
monitoring vsphere-csi-controller-prometheus-servicemonitor 42s

c Check the logs of the prometheus-k8s-* pods in the monitoring namespace.

The logs list any potential issues with scraping metrics from vSphere Container Storage Plug-in. For example, if you have not correctly updated the ClusterRole, you might observe errors similar to the following:

ts=2022-03-07T15:15:06.580Z caller=klog.go:116 level=error component=k8s_client_runtime
func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.22.4/tools/cache/reflector.go:167:
Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden:
User \"system:serviceaccount:monitoring:prometheus-k8s\" cannot list resource \"endpoints\"
in API group \"\" in the namespace \"vmware-system-csi\""
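To check directly whether the prometheus-k8s service account has the required permission, you can, for example, use kubectl auth can-i. This sketch shows the expected result after the ClusterRole is updated correctly.

% kubectl auth can-i list endpoints -n vmware-system-csi --as=system:serviceaccount:monitoring:prometheus-k8s
yes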

Launch Prometheus UI
Access the Prometheus UI and view the vSphere Container Storage Plug-in metrics that Prometheus collects.

Procedure

1 Make Prometheus server UI accessible.

By default, the prometheus-k8s service that you deployed in Step 2.b is of the ClusterIP type. This means that it is an internal service and is not accessible externally.

% kubectl get svc prometheus-k8s -n monitoring


NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-k8s ClusterIP 10.99.185.136 <none> 9090/TCP,8080/TCP 97m

You can address this in various ways, for example, by changing the service type to NodePort, or to LoadBalancer if you have a load balancer available to provide external IPs. A sketch of the NodePort alternative follows the port-forward example below.
For the purposes of this test, port-forward the service port (9090) to make it accessible from the local host.

% kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090


Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
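Alternatively, if you prefer a persistent endpoint instead of port-forwarding, one possible approach is to patch the service to the NodePort type. This sketch assumes that the Kubernetes node IPs are reachable from your desktop.

% kubectl patch svc prometheus-k8s -n monitoring -p '{"spec":{"type":"NodePort"}}'
% kubectl get svc prometheus-k8s -n monitoring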

2 Open a browser on your desktop and connect to http://localhost:9090 to see the Prometheus UI.

3 View the following metrics.

- vsphere_csi_info, exposed by the vsphere-csi-controller container on port 2112.


- vsphere_syncer_info, exposed by the vsphere-syncer container on port 2113.

Create Grafana Dashboard


Launch the Grafana portal and create a dashboard to display metrics of vSphere Container
Storage Plug-in.

Procedure

1 Make Grafana UI accessible.

As with Prometheus, Grafana is deployed as a ClusterIP service and is not accessible externally.

Use the port-forward functionality to access Grafana from a browser on the local host. This time the port is 3000.

% kubectl get svc grafana -n monitoring


NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.97.77.202 <none> 3000/TCP 98m

% kubectl --namespace monitoring port-forward svc/grafana 3000


Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

2 Access the Grafana UI through http://localhost:3000.

3 In the Grafana UI, set up the dashboard for vSphere Container Storage Plug-in.

You can import sample dashboards from GitHub at https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/master/grafana-dashboard.
For information on how to import a Grafana dashboard, see the Grafana documentation at Export and import.


4 Review the Grafana dashboard.

The dashboard displays the vSphere Container Storage Plug-in metrics that Prometheus has scraped and stored.

Set Up a Prometheus Alert


You can define an alert for vsphere-csi-controller to notify you when a problem occurs.

Procedure

1 Verify that an alert manager has been deployed.

The alert manager is normally deployed in addition to other services when you deploy kube-
prometheus.

% kubectl get service -n monitoring


NAME TYPE CLUSTER-IP EXTERNAL-IP
PORT(S) AGE
alertmanager-main ClusterIP 10.102.4.82 <none> 9093/
TCP,8080/TCP 19h
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/
TCP,9094/UDP 19h
blackbox-exporter ClusterIP 10.98.242.67 <none> 9115/
TCP,19115/TCP 19h
grafana NodePort 10.102.89.156 <none>
3000:31926/TCP 19h
kube-state-metrics ClusterIP None <none> 8443/
TCP,9443/TCP 19h
node-exporter ClusterIP None <none>
9100/TCP 19h
prometheus-adapter ClusterIP 10.98.39.123 <none>
443/TCP 19h
prometheus-k8s NodePort 10.105.37.241 <none> 9090:30091/
TCP,8080:32536/TCP 19h
prometheus-operated ClusterIP None <none>
9090/TCP 19h
prometheus-operator ClusterIP None <none>
8443/TCP 19h


% kubectl get pod -n monitoring


NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 19h
alertmanager-main-1 2/2 Running 0 19h
alertmanager-main-2 2/2 Running 0 19h
blackbox-exporter-55bb7b586b-dpfxb 3/3 Running 0 19h
grafana-7474bdd9cb-5mklg 1/1 Running 0 19h
kube-state-metrics-c655879df-vg8qc 3/3 Running 0 19h
node-exporter-2bk6j 2/2 Running 0 19h
node-exporter-fmzhq 2/2 Running 0 19h
node-exporter-n446g 2/2 Running 0 19h
node-exporter-q4qk2 2/2 Running 0 19h
node-exporter-zjkrf 2/2 Running 0 19h
node-exporter-ztvmm 2/2 Running 0 19h
prometheus-adapter-6b59dfc556-ks2c9 1/1 Running 0 19h
prometheus-adapter-6b59dfc556-m9rpb 1/1 Running 0 19h
prometheus-k8s-0 2/2 Running 0 19h
prometheus-k8s-1 2/2 Running 0 19h
prometheus-operator-7b997546f8-pzs8t 2/2 Running 0 19h

2 Define an alert for vsphere-csi-controller.

To do this, specify a PrometheusRule for vsphere-csi-controller.

The following example raises an alert when the create-volume success rate drops below 95% in the last six hours.

% /kube-prometheus/manifests# cat vsphereCSIController-prometheusRule.yaml

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/component: vsphere-csi-controller
    app.kubernetes.io/name: vsphere-csi-controller
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
    role: alert-rules
  name: vsphere-csi-controller-rules
  namespace: monitoring
spec:
  groups:
  - name: vsphere.csi.controller.rules
    rules:
    - alert: CreateVolumeSuccessRateLow
      annotations:
        description: Success rate of CSI volume OP "create-volume" is lower than 95% in last 6 hours.
        summary: Success rate of CSI volume OP "create-volume" is lower than 95% in last 6 hours.
      expr: sum(rate(vsphere_csi_volume_ops_histogram_count{status="pass", optype="create-volume"}[6h]))/sum(rate(vsphere_csi_volume_ops_histogram_count{optype="create-volume"}[6h]))*100 < 95
      for: 5m
      labels:
        issue: Success rate of CSI volume OP "create-volume" is lower than 95% in last 6 hours.
        severity: warning
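After saving the manifest, apply it and confirm that the rule object exists. The following commands are a sketch that assumes the manifest file name used above.

% kubectl apply -f vsphereCSIController-prometheusRule.yaml
prometheusrule.monitoring.coreos.com/vsphere-csi-controller-rules created

% kubectl get prometheusrule vsphere-csi-controller-rules -n monitoring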

Collect vSphere Container Storage Plug-in Logs


Use the kubectl logs command to collect logs for vSphere Container Storage Plug-in pods. You can then analyze the logs to diagnose problems in your vSphere Container Storage Plug-in environment.

Procedure

- Collect logs for vSphere Container Storage Plug-in pods by using the following command.

kubectl logs -f pod-name -c container-name -n namespace

The command takes these options.

Option Description

pod-name The name of the vSphere Container Storage Plug-in pod. The optional -f flag streams the log output.

-c container-name The name of the container in the vSphere Container Storage Plug-in pod.

-n namespace The namespace where vSphere Container Storage Plug-in is deployed. The default is vmware-system-csi.

Example: Command Examples


To collect vSphere CSI controller logs, use the following command:

kubectl logs vsphere-csi-controller-suffix -c vsphere-csi-controller -n vmware-system-csi

To collect vSphere CSI syncer logs, use the following command:

kubectl logs vsphere-csi-controller-suffix -c vsphere-syncer -n vmware-system-csi

To collect vSphere CSI Node Daemonset Pod logs, use the following command:

kubectl logs vsphere-csi-node-suffix -c vsphere-csi-node -n vmware-system-csi

Note In the production environment, vSphere Container Storage Plug-in runs with multiple replicas. To perform root cause analysis if you find any issues, collect the logs of all containers from every replica of the vSphere Container Storage Plug-in pod, as shown in the sketch that follows.
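For example, a small shell loop along the following lines captures the logs of all containers from every controller replica. The sketch assumes the default app=vsphere-csi-controller label, which the ServiceMonitor example earlier in this chapter also uses.

% for pod in $(kubectl get pods -n vmware-system-csi -l app=vsphere-csi-controller -o name); do
    kubectl logs -n vmware-system-csi "$pod" --all-containers > "$(basename "$pod").log"
  done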
