
SRINI VASAN5:17 PM

Oh sorry done
Badri R5:25 PM
yes
vamsi krishna5:25 PM
yes
Kalaiarasan Kirubanandam5:25 PM
I have a few doubts
sathyaraj k5:25 PM
yes
Kalaiarasan Kirubanandam5:25 PM
can we attach role(s) to user group(s)?
can we attach role(s) to user(s)?
can K8s objects be assigned to roles, roles assigned to a group, and the group
assigned to a network group?
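For context on the questions above: in Kubernetes RBAC, a RoleBinding attaches a Role to users, groups, or service accounts through its subjects list, while access to K8s objects is granted by the Role's rules rather than attached to the group directly. A minimal sketch (the names here are illustrative, not from the course files):

```yaml
# Hypothetical RoleBinding: attaches the "developer" Role to a user and a group.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: crm-stg
subjects:
- kind: User                           # an individual user
  name: jane
  apiGroup: rbac.authorization.k8s.io
- kind: Group                          # a user group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```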
Badri R5:30 PM
clear.
Thulasiraja m6:10 PM
Yes
Kalaiarasan Kirubanandam6:10 PM
s
Saju Krishnan6:10 PM
yes
vaithiy anathan6:10 PM
yes
Mani kandan6:32 PM
looks like my VM got corrupted, need to check with John
$ script/vagrant/sloopstash-k8s-mtr/up
Vagrant failed to initialize at a very early stage:

The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.

Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all exist
Saju Krishnan6:36 PM
yes
archana sekar6:36 PM
yes
Tajdeen Abdulla6:36 PM
Yes
sathyaraj k6:36 PM
Yes
Jaicy Vignesh6:36 PM
s
harvind kumar6:36 PM
yes
Sathishkumar M6:36 PM
yes
somasundaram selvamuthukumar6:36 PM
yes
Aanchal Mathew6:36 PM
yes
Kolli Santhosh6:36 PM
yes
Sathiyaraj Sridhar6:40 PM
vim ~/.kube/config
Sathiyaraj Sridhar6:44 PM
cd /opt/kickstart-kubernetes
kubectl apply -f role/devops.yml -n crm-stg
kubectl apply -f role/developer.yml -n crm-stg
kubectl apply -f role/qa.yml -n crm-stg
Sathiyaraj Sridhar6:45 PM
kubectl get roles -o wide --show-labels=true -n crm-stg
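A namespaced Role like the ones applied above whitelists verbs on resources within one namespace. A hedged sketch of what role/developer.yml might resemble (the actual repository file may differ):

```yaml
# Illustrative Role granting read-only access to pods and services in crm-stg.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: crm-stg
rules:
- apiGroups: [""]                     # "" = the core API group
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
```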
Vishal S6:45 PM
s
sathyaraj k6:45 PM
Yes
Arunkumar Ponnurangam6:45 PM
yes
Kolli Santhosh6:46 PM
yes
Saju Krishnan6:46 PM
yes
BANDITA GARNAIK6:46 PM
yes
archana sekar6:46 PM
done
Jaicy Vignesh6:46 PM
s
John Sundarraj6:47 PM
Windows host machine users open new Git Bash terminal
Ubuntu host machine users open new terminal.
Rajkumar R T6:48 PM
yes
Saju Krishnan6:48 PM
yes
sathyaraj k6:48 PM
Yes
Rajkumar R T6:48 PM
vim ~/.kube/config
archana sekar6:49 PM
able to see
Kalaiarasan Kirubanandam6:49 PM
s
Azharudeen AshrafAli6:49 PM
k
Vishal S6:49 PM
s
BANDITA GARNAIK6:49 PM
yes
Sathiyaraj Sridhar6:50 PM
kubectl get pods -o wide --show-labels=true -n crm-stg
Saju Krishnan6:50 PM
yes
BANDITA GARNAIK6:50 PM
yes
Kalaiarasan Kirubanandam6:50 PM
yes
Jaicy Vignesh6:50 PM
S
Kolli Santhosh6:50 PM
yes
Vishal S6:51 PM
s
Rajkumar R T6:51 PM
bash: kubectl: command not found
Sathiyaraj Sridhar6:53 PM
kubectl apply -f role/qa.yml -n crm-stg
kubectl get rolebindings -o wide --show-labels=true -n crm-stg
vaithiy anathan6:55 PM
yes
harvind kumar6:55 PM
yes
Tajdeen Abdulla6:55 PM
Yes
Kolli Santhosh6:55 PM
yes
Sathishkumar M6:55 PM
yes
Aanchal Mathew6:55 PM
yes
Sathiyaraj Sridhar6:56 PM
kubectl get services -o wide --show-labels=true -n crm-stg
John Sundarraj6:57 PM
kubectl delete -f service/app.yml -n crm-stg
harvind kumar6:57 PM
yes
Sathishkumar M6:57 PM
yes
Tajdeen Abdulla6:57 PM
Yes
John Sundarraj6:58 PM
kubectl apply -f role/binding/devops.yml -n crm-stg
vaithiy anathan6:59 PM
yes
Kavi Arasu6:59 PM
yes
Sathishkumar M6:59 PM
yes
Kalaiarasan Kirubanandam6:59 PM
s
vamsi krishna6:59 PM
yes
Saju Krishnan7:00 PM
yes
John Sundarraj7:01 PM
kubectl delete -f role/binding/devops.yml -n crm-stg
John Sundarraj7:03 PM
Will be back in 10 mins after break.
Mani kandan7:18 PM
Sudharshan_Pranav@DESKTOP-V1GLSSN MINGW64 /opt/kickstart-kubernetes (master)
$ script/vagrant/sloopstash-k8s-mtr/destroy
Vagrant failed to initialize at a very early stage:

The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.

Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually
Sathiyaraj Sridhar7:20 PM
@Manikandan: Just delete the index file and the .lock file from the machine-index
folder.
Sathiyaraj Sridhar7:21 PM
@Manikandan: And then bring the VM back up.
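The fix described above amounts to a couple of shell commands. The path below assumes Vagrant's default data folder (~/.vagrant.d on Linux, macOS, and Git Bash); adjust it if VAGRANT_HOME is set to something else.

```shell
# Remove the corrupt machine index and its lock file so Vagrant can
# rebuild them on the next "vagrant up" (default VAGRANT_HOME assumed).
rm -f "$HOME/.vagrant.d/data/machine-index/index" \
      "$HOME/.vagrant.d/data/machine-index/index.lock"
```

Vagrant recreates the machine index the next time an environment is brought up.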
Sathiyaraj Sridhar7:43 PM
kubectl get nodes -o wide --show-labels=false
kubectl apply -f cluster-role/qa.yml
kubectl get clusterroles -o wide --show-labels=true
Sathiyaraj Sridhar7:46 PM
kubectl apply -f cluster-role/binding/qa.yml
kubectl get clusterrolebindings -o wide --show-labels=true
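Unlike Roles, ClusterRoles are not namespaced and can grant access to cluster-scoped resources such as nodes. A hedged sketch of what cluster-role/qa.yml might look like (illustrative, not the actual repository file):

```yaml
# Illustrative ClusterRole granting read-only access to cluster-scoped resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: qa
rules:
- apiGroups: [""]
  resources: ["nodes", "namespaces"]   # cluster-scoped, so a Role cannot grant these
  verbs: ["get", "list", "watch"]
```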
Sathishkumar M7:47 PM
yes
Kolli Santhosh7:47 PM
yes
Aanchal Mathew7:47 PM
yes
harvind kumar7:47 PM
yes
Tajdeen Abdulla7:47 PM
Yes
Kalaiarasan Kirubanandam7:47 PM
s
karthik s7:47 PM
Working
vaithiy anathan7:47 PM
yes
archana sekar7:48 PM
yes
sathyaraj k7:48 PM
Yes
Vishal S7:48 PM
s
Sathiyaraj Sridhar7:49 PM
kubectl drain sloopstash-k8s-wkr --delete-local-data --force --ignore-daemonsets
kubectl apply -f cluster-role/devops.yml
Sathiyaraj Sridhar7:51 PM
kubectl get clusterroles -o wide --show-labels=true | grep devops
kubectl apply -f cluster-role/binding/devops.yml
Sathiyaraj Sridhar7:52 PM
kubectl drain sloopstash-k8s-wkr --delete-local-data --force --ignore-daemonsets
sathyaraj k7:55 PM
yes
Jaicy Vignesh7:55 PM
yes John
Kalaiarasan Kirubanandam7:55 PM
s
Shivakanth Chandrasekaran7:55 PM
s
Sathishkumar M7:55 PM
yes
Sathiyaraj Sridhar7:57 PM
kubectl drain sloopstash-k8s-wkr --delete-local-data --force --ignore-daemonsets
Sathiyaraj Sridhar7:58 PM
sudo docker container ls
kubectl drain sloopstash-k8s-mtr --delete-local-data --force --ignore-daemonsets
Saju Krishnan8:02 PM
do we also need to delete the worker node...?
Sathiyaraj Sridhar8:02 PM
kubectl get nodes -o wide --show-labels=false
vaithiy anathan8:03 PM
yes
sathyaraj k8:03 PM
yes
Kalaiarasan Kirubanandam8:03 PM
in real time, when we will use for drain?
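On the question of when drain is used in practice: typically before node maintenance (OS patching, kubelet or kernel upgrades) or before removing a node from the cluster, so that workloads are evicted gracefully instead of being killed. A sketch of the usual cycle, using the course's worker node name:

```
# Typical maintenance cycle for a worker node.
kubectl cordon sloopstash-k8s-wkr                      # stop new pods being scheduled
kubectl drain sloopstash-k8s-wkr --ignore-daemonsets   # evict running pods safely
# ... perform maintenance (patches, kubelet upgrade) ...
kubectl uncordon sloopstash-k8s-wkr                    # allow scheduling again
```

If the node is being decommissioned instead, the drain is followed by `kubectl delete node`, as shown later in the session.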
Sathishkumar M8:03 PM
yes
Kavi Arasu8:03 PM
yes
BANDITA GARNAIK8:03 PM
yes
Saju Krishnan8:03 PM
yes
harvind kumar8:03 PM
yes
Swati Anpat8:03 PM
yes
Sathiyaraj Sridhar8:05 PM
journalctl -f -u kubelet.service
Kalaiarasan Kirubanandam8:06 PM
tq
Sathiyaraj Sridhar8:07 PM
kubectl delete node sloopstash-k8s-wkr
vaithiy anathan8:07 PM
yes
ganesh8:07 PM
yes
Rajkumar R T8:07 PM
s
Kalaiarasan Kirubanandam8:07 PM
s
Sathishkumar M8:07 PM
yes
Sathiyaraj Sridhar8:08 PM
sudo kubeadm reset
Sathiyaraj Sridhar8:11 PM
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo
iptables -X
kubectl get nodes -o wide --show-labels=false
kubectl delete node sloopstash-k8s-mtr
sudo docker container ls
Sathiyaraj Sridhar8:12 PM
sudo kubeadm reset
Sathiyaraj Sridhar8:13 PM
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo
iptables -X
Rajkumar R T8:14 PM
yes
Aanchal Mathew8:14 PM
done
vaithiy anathan8:14 PM
yes
sudha TM8:14 PM
yes
Kavi Arasu8:14 PM
yes
Saju Krishnan8:14 PM
yes
Kalaiarasan Kirubanandam8:14 PM
s
Vishal S8:14 PM
s
ganesh8:14 PM
yes
sathyaraj k8:14 PM
Yes
vamsi krishna8:14 PM
yes
Azharudeen AshrafAli8:26 PM
When are these topics planned to be covered?
>> kubectl rollout
>> node affinity
>> stateful set - for db service
Kalaiarasan Kirubanandam8:51 PM
thanks John.. please recap Docker, Swarm, and K8s
Muthu Kumaran8:51 PM
Also autoscaling concepts, certificate renewal, and daemon sets.
These are some common interview questions.
Kubes.aws8:59 PM
Thanks John and Satyaraj!
