
StatefulSet

https://www.velotio.com/engineering-blog/exploring-upgrade-strategies-for-stateful-sets-in-kubernetes
StatefulSet pods have a unique identity composed of an ordinal, a stable network identity, and stable storage. The identity sticks to the pod regardless of which node it is scheduled on.
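The stable identity shows up in predictable pod names and DNS records. Assuming a StatefulSet named cassandra governed by a headless service also named cassandra in the default namespace (illustrative names), the pods resolve as:

cassandra-0.cassandra.default.svc.cluster.local
cassandra-1.cassandra.default.svc.cluster.local
cassandra-2.cassandra.default.svc.cluster.local

cassandra-0 keeps its name and its PersistentVolumeClaim even if it is rescheduled onto a different node.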
Update Strategies for StatefulSets
OnDelete update strategy
OnDelete prevents the controller from automatically updating its pods; you need to delete each pod manually for the change to take effect. The OnDelete strategy is important where the user needs to perform some action or verification after each pod is updated.
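As a minimal sketch, the strategy is set in the StatefulSet spec (using the same cassandra StatefulSet that appears in the patch example below):

# StatefulSet spec fragment
updateStrategy:
  type: OnDelete

After changing the pod template (e.g. the container image), delete each pod by hand; the controller recreates it with the new definition:

kubectl delete pod cassandra-2   # recreated with the updated template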
Rolling update strategy
Rolling update is an automated update process in which the controller deletes and then recreates each of its pods. Pods get updated one at a time. While updating, the controller makes sure an updated pod is running and in a ready state before updating its predecessor. The pods in the StatefulSet are updated in reverse ordinal order (the same as pod termination order, i.e. from the largest ordinal to the smallest).
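A sketch against the same cassandra StatefulSet: set the type to RollingUpdate and watch the automated rollout progress:

kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
kubectl rollout status statefulset/cassandra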
Partitioning a RollingUpdate (Staging an Update)
The updateStrategy contains one more field for partitioning the RollingUpdate. If a partition is specified, all pods with an ordinal greater than or equal to the partition will be updated, while pods with an ordinal less than the partition will not be. If a pod with an ordinal below the partition is deleted, it will be recreated with the old definition/version. This partitioning feature is useful when you want to stage an update, roll out a canary, or perform a phased rollout.
kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
Verifying if the upgrade was successful
1. With the OnDelete strategy, you can update the pods one by one, checking the application status after each pod to make sure the upgrade is working fine.
2. With the RollingUpdate strategy, you can check the application status once all the running pods of your application have been upgraded.
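In either case, you can confirm which image each pod is actually running with something like the following (the app=cassandra label is an assumption about how the pods are labeled):

kubectl get pods -l app=cassandra -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'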
For an application like Cassandra, the OnDelete strategy is preferable to RollingUpdate. In a rolling update, we saw that the Cassandra pods get updated one by one, from the highest ordinal to the lowest. It can happen that after updating two pods the Cassandra cluster goes into a failed state, and unlike with OnDelete you cannot stop to recover it; you have to attempt recovery once the complete upgrade is done, i.e. once all the pods have been upgraded to the provided image. If you have to use a rolling update, try partitioning it.

https://sematext.com/blog/kubernetes-elasticsearch/
Every instance of Elasticsearch running in the cluster is called a node. In Kubernetes, an Elasticsearch node is equivalent to an Elasticsearch Pod. Don't confuse it with a Kubernetes Node, which is one of the virtual machines Kubernetes runs on.
Master Pods are responsible for managing the cluster, managing indices, and electing a new master if needed. Data Pods are dedicated to storing data, while client Pods have no role other than funneling incoming traffic to the rest of the Pods.
Data Pods are deployed as StatefulSets with PersistentVolumes and
PersistentVolumeClaims. They will persist data between restarts, which
is what you want.
Master Pods can be deployed as either Deployments or StatefulSets.
A headless service is created for each StatefulSet and used for node discovery within the cluster.
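A minimal sketch of such a headless service, with illustrative names and the standard Elasticsearch transport port:

# Headless service: clusterIP is None, so DNS returns the individual Pod IPs
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-master-headless
spec:
  clusterIP: None
  selector:
    app: elasticsearch-master
  ports:
  - name: transport
    port: 9300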
Client Pods are completely stateless and can be deployed as a simple
Kubernetes Deployment.
A Kubernetes LoadBalancer Service is used to forward inbound traffic to the client Pods. All of your apps, as well as Kibana, will be configured to go through the LoadBalancer service.
Memory Requirements
If you are setting up an Elasticsearch cluster on Kubernetes for yourself, remember to allocate at least 4GB of memory to your Kubernetes Nodes; you will need at least 7 Nodes to run this setup without any hiccups. The default size of the PersistentVolumeClaim for each Elasticsearch Pod is 30GB, which will help you determine how much block storage you will need.

Deploying a 7-Pod Elasticsearch cluster (3 master + 2 data + 2 client)


# master.yaml
---
clusterName: "elasticsearch"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"
replicas: 3

# data.yaml
---
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "true"
  data: "true"
replicas: 2

# client.yaml
---
clusterName: "elasticsearch"
nodeGroup: "client"
roles:
  master: "false"
  ingest: "false"
  data: "false"
replicas: 2
service:
  type: "LoadBalancer"
Elasticsearch implementation
podManagementPolicy: "Parallel"
The default (OrderedReady) is to deploy all pods serially. Setting this to Parallel starts all pods at the same time when bootstrapping the cluster.
updateStrategy: RollingUpdate
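Both settings can be confirmed on the rendered StatefulSet; the elasticsearch-master name below is an assumption based on the chart's clusterName-nodeGroup naming:

kubectl get statefulset elasticsearch-master -o jsonpath='{.spec.podManagementPolicy}{"\n"}{.spec.updateStrategy.type}{"\n"}'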
volumeClaimTemplates
StatefulSet provides a field named volumeClaimTemplates. With it, you can request PVCs from the storage class dynamically. As part of your new StatefulSet definition, replace the volumes section with volumeClaimTemplates.
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
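Note that the Helm value above is singular (volumeClaimTemplate); in the raw StatefulSet API the field is plural. A minimal sketch of what such a chart renders, with illustrative names and image tag:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
spec:
  serviceName: elasticsearch-data-headless   # the headless service used for discovery
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch-data
  template:
    metadata:
      labels:
        app: elasticsearch-data
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:   # one PVC is created per Pod from this template
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 30Gi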

For reference, the node roles that can be assigned to an Elasticsearch node:

roles:
- master
- data
- data_content
- data_hot
- data_warm
- data_cold
- ingest
- ml
- remote_cluster_client
- transform

replicas: 3
minimumMasterNodes: 2

resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

networkHost: "0.0.0.0"
