The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Kubernetes and Cloud Native Associate
(KCNA) Study Guide, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.
The views expressed in this work are those of the authors and do not represent the publisher’s views.
While the publisher and the authors have used good faith efforts to ensure that the information and
instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility
for errors or omissions, including without limitation responsibility for damages resulting from the use
of or reliance on this work. Use of the information and instructions contained in this work is at your
own risk. If any code samples or other technology this work contains or describes is subject to open
source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use
thereof complies with such licenses and/or rights.
978-1-098-13888-2
[FILL IN]
Table of Contents
1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
The Book Structure 12
Intended Book Audience 14
What To Expect From This Study Guide 17
Why Cloud Native and Why Now 17
Inspirational Success Stories 20
Self-questionnaire for New Cloud Native Practitioners 22
Test 1 - The CNCF Ecosystem 23
Test 2 - General Cloud Native Concepts 24
Test 3 - Kubernetes Topics 26
Test 4 - Kubernetes Commands 27
Test 5 - Linux Fundamentals 29
Test 6 - Other Related Projects 30
PLACEHOLDER Expert Insights 1 - Walid Shaari 32
PLACEHOLDER EXPERT INSIGHTS 33
PLACEHOLDER CONCLUSION - Chapter 1 40
Additional Areas of Knowledge (outside of the official curriculum) 53
Part 6 - Kubernetes 101 Hands-on 53
Part 7 - Linux Fundamentals 54
Part 8 - API-enabled Development 54
Part 9 - Telematics 55
Part 10 - Web Development 55
Part 11 - Applied Usage 55
Exam Study Plan 56
Step 1 - Determine your Current Level of Knowledge 57
Step 2 - Prepare and Adopt a Study Schedule 57
Step 3 - Focus on the Right Material 57
Exam Logistics and What to Expect 58
Step 1 - Schedule Your Exam 58
Step 2 - Exam Day Preparation 61
Step 3 - Post-Exam Logistics 63
Potential Certification Paths 63
PLACEHOLDER CONCLUSION - Chapter 2 67
Orchestration 104
Relationship between Cloud Computing and Cloud Native 105
Building Cloud Native Applications 106
Other Relevant Cloud Native Topics 108
DevOps and CI/CD 108
GitOps and Infra as Code (IaC) 110
Declarative vs Imperative 110
Stateful vs Stateless 111
Egress and Ingress 111
Serverless 112
Brief Table of Contents (Not Yet Final)
CHAPTER 1
Introduction
KCNA. Interesting choice for a new technology certification acronym. Why? Well,
when you search for this term for the first time via Google or Bing, most of the
results will refer to a national news agency in Asia. But no, nothing to do with that.
The Linux Foundation released the Kubernetes and Cloud Native Associate exam
in 2021, and even if the acronym can be a bit confusing at first, the name is not
at all random. The exam and certification name is a clear and direct reference
to the two main topics of study, and to the starting point for millions of learners and
adopters: the increasingly rich cloud native ecosystem, and the central role of
Kubernetes as a key technology enabler.
It makes sense. Containerization is quickly becoming the new normal for big and
small technology companies and developers around the world, and technologies such
as Docker and Kubernetes are de facto standards for most of the adopters out
there. A powerful movement, and not a new one. The Cloud Native Computing Foundation
was created in 2015, but companies like Google have leveraged cloud native
technologies since the early aughts. That said, it is a complex area of highly technical
knowledge that requires a lot of practice, effort, and upskilling. But it also requires
the creation of new learning and training resources, easy-to-read documentation, and
simpler ways for professionals to learn and then adopt cloud native tools. This
exam is part of the equation for that to happen.
But hey, this is not really a hurdle. I still remember the first time I heard about cloud
native. It was during a job interview, and I naively thought that my interviewer meant
something like general cloud computing. We will discuss the difference (and relation)
between these two terms later, but it is clear that I was missing something big. At that
time, I was well versed in cloud technologies because of my data and AI projects,
and I was of course aware of the existence of something called Kubernetes. But this
old-school telematics engineer had missed the first years of the container revolution,
and his main references for virtualization were the traditional VMware-style
virtual machines. And new communities such as the CNCF (Cloud Native Computing
Foundation) were still unknown to me. A lot to catch up with…
However, the promise of something similar to virtual machines (VMs) but even
“better,” lighter, and super scalable was very tempting. In addition to that, my curious
nature and willingness to learn helped me connect the cloud native dots with the
building blocks of the data and AI ecosystem. “Do you mean that I can deploy
resources as new iterations or different versions of AI models, whenever I need to,
in a relatively automated way? Are you telling me that this is how big companies
are doing it today?” I still remember the words of a friend in the videogame
industry, who was leveraging containers to deploy AI models several times a day, to
release new features or just to test alternative model versions. Too new, too exciting; I
couldn’t let it go.
Fun was about to start, but I quickly realized that Kubernetes wasn’t a simple topic.
Technical knowledge of its architecture, networking, security, managed options from
hyperscalers, different pieces of the same puzzle… This was the next step for my
engineering degree in Spain, except that no professor explained this to us because it
was just too early. Also, I had lost part of my command-line ability, and abandoned
my arduous exploration of Linux-related topics. Years before, I used to stay awake
at night trying to find ways to replicate in Ubuntu Linux some cool functionalities
from Windows, participate in forums, try new open source software, debug
new tools… but most of these things were gone. Instead, I was just a cloud-enabled
adopter looking forward to making sense of Kubernetes, Docker, and other topics.
I had to find a way to start my cloud-native journey (in parallel with a few other
certification topics I have been “devouring” over the last few years), so I took a
pragmatic and somewhat lazy approach: avoid the most technical parts, and focus first
on the cloud-native ecosystem and community. That decision led me to learn more about
the CNCF and its actors, the open source governance model, cool projects beyond the
omnipresent Kubernetes, and the levels of maturity and incubation. Such a discovery.
A community full of talent, willingness to teach and collaborate, existing resources,
etc.
Besides that, my research helped me find amazing technical evangelists and industry
experts, a great source of knowledge and an amazing way to learn from the best. I
also tried to understand the history of things. Who created Kubernetes? How did
this tool become so relevant and necessary everywhere? And why? (note: the official
Kubernetes Documentary and its part 2 are a must, and a great option to get some
initial answers). Then, I organically started to explore all technical topics and connect
more dots. It was challenging, but feasible to learn with a good study plan and
learning material. The initial version of the one you will see in this book.
Coming back to the KCNA exam, I remember being very excited when I first saw the
scope and topics that the CNCF folks chose. But why was this exam necessary? Why
would someone want to take it? Well, basically, the KCNA is the best way to demonstrate
initial knowledge of Kubernetes and cloud native topics, and a great opportunity
for the community to create content for those who are willing to learn and evolve.
Finally, a fundamentals, associate-level certification that would allow more people to
join the movement, regardless of their previous background.
Up to then, the only available options for learning and certification required very
advanced Kubernetes knowledge: concretely, the Certified Kubernetes Administrator
(CKA), Certified Kubernetes Security Specialist (CKS), and Certified Kubernetes
Application Developer (CKAD). Quite a long walk for people who are still learning to
crawl. But the CNCF and Linux Foundation certification teams nailed it, and they released
a great exam that, even if it is not the easiest one, helps bring in new folks and
generates great career opportunities for both fresh and experienced professionals.
Perfect timing, great opportunity. Some time after the beta exam release, we started
to see more and more cool learning resources from cloud native professionals. I
myself prepared some 101-level KCNA exam prep sessions with O’Reilly Media
and the Open Source Summit LATAM. I consider myself a pretty good lecturer, but
that was a fairly hard topic, and a difficult one to simplify. My main goal wasn’t to
deliver the perfect session, but to find a teaching approach that would get people
closer to cloud native and Kubernetes, regardless of their professional background
and technical ability. To be honest, I’m a bit biased, because I have been teaching big
data and AI to business-oriented professionals for a while. If I could explain those
topics, I had to make it happen for this one!
Some months later, serendipity brought me to THE O’Reilly team working with
Kubernetes-related authors: concretely, the team that had managed previous book
publications for other Kubernetes certification topics, including the wonderful study
guides that so many professionals had used before to prepare for their CKA, CKAD,
and CKS certifications. And I was really willing to put some thought and time into
designing something that might help KCNA candidates to 1) learn about Kubernetes
and the cloud native ecosystem and landscape, and 2) pass the exam and leverage it to
get amazing career opportunities. I wanted to prove that even the most challenging
topics can be explained in a way that helps all sorts of professionals upskill
and reskill.
The rest is history. The wonderful O’Reilly team gave me the opportunity to sit down
and write the book: a relatively light guide that tells candidates what to learn,
in a pragmatic and simple way, but that also serves as the starting point for their new
cloud native journey. A pretty difficult task if we keep in mind that I do several things
in my professional life (i.e., working at Microsoft, teaching, international speaking
engagements, etc.). But all this was totally worth it. I like the topic a lot, and I love
teaching and writing content. Being a book author is one of my main personal goals,
and the KCNA is a highly relevant and scalable area of study and work. Let’s see if this
passion can be translated into a great and useful resource for the young and experienced
professionals out there. If this book helps just a few of them, I will feel very satisfied,
the same as when I teach a small group of students. Contributing to creating a new
generation of cloud-enabled professionals is very cool. Please enjoy it.
Adrian Gonzalez Sanchez
Chapter 3, CNCF and the New Era of Cloud Native
The third chapter includes a detailed overview of the Cloud Native Computing
Foundation (CNCF), its mission, vision, and contributions to the cloud-native
universe. You will learn how CNCF’s initiatives and projects are redefining com‐
puting, and understand the nuances of cloud-native’s transformative impact on
businesses, from startups to big companies. We will also explore some relevant
technologies beyond Kubernetes.
Chapter 4, Essential Concepts for Cloud Native Practitioners
This fourth chapter is a descriptive exploration of cloud computing and cloud
native foundational concepts. We will dive deep into topics like as-a-Service
models, containerization, microservices, declarative APIs, etc. The idea is to
equip you with additional information for your KCNA exam, but also that you
make sense of the implications of these concepts on software development,
deployment, and operations, and understand their interplay in building robust,
scalable, and resilient systems.
Chapter 5, The Key Role of Kubernetes
Kubernetes (also known as K8s) is more than just a buzzword; it’s an ecosystem
of very relevant technologies. You will understand its origins, its meteoric rise in
the tech world, and why it stands out as the premier container orchestration tool.
The chapter dissects its architecture, the philosophy behind its design principles,
and its pivotal role in the successful implementation of cloud-native strategies.
Chapter 6, Technical Kubernetes Topics
The sixth chapter is where (technical) things get serious. We will deep dive into
the technical details of Kubernetes, from understanding the intricacies of pods,
deployments, and services to unraveling more complex topics like ingress con‐
trollers, network policies, persistent storage, and stateful applications. Through
detailed explanations and visuals, you gain a profound understanding of how
Kubernetes works.
Chapter 7, Final Recommendations
Before taking your KCNA exam, this chapter will equip you with the final set
of tools and some pieces of advice. Benefit from a curated list of resources,
and real-life tips from professionals who have successfully traveled this path.
Understand the common pitfalls, how to manage your time during the exam, and
strategies to tackle challenging questions.
Summarizing, this book will cover the step-by-step learning blocks of your KCNA
journey. Let’s now see the typical audience for both the KCNA exam and the Study
Guide.
technical aspects such as commands and architecture elements that may help you
with some specific exam questions. Also, you can leverage interactive learning
elements from O’Reilly, such as Kubernetes and Docker sandboxes to practice
any kind of deployment and configuration, including the kubectl commands we
will explore in Chapter 6.
In terms of roles, you may be just starting off in tech, or an experienced professional
from another technical area, a technical manager or even a leader willing to better
understand these topics. Or maybe you are just looking for new career opportunities.
You can see in Figure 1-1 the typical exam candidate profiles for the KCNA certifica‐
tion.
Regardless of your previous background, motivation is key to reaching the actual goal
of this study guide: learning the fundamentals of cloud native, including Kubernetes,
and of course passing the KCNA exam. The exam is not a proof of knowledge or
experience by itself, but it is certainly a great first step for anyone willing to join the
cloud native industry, and a powerful portfolio asset for new career opportunities. By
the end of the book, we will recommend additional community and learning resources
that will complement your upskilling journey.
Regardless of your current activities, here are some examples of cloud native roles
that can inspire you to continue your cloud native learning path, based on the
potential career options:
Software developers
Responsible for designing, coding, and testing the microservices that make up
cloud native applications. They use tools and languages suitable for the cloud
environment and follow the best practices of DevOps and continuous delivery.
We are just starting, but take some time to reflect on your personal goals beyond the
KCNA exam and certification. What do you want to achieve? What is your envisioned
role? What kind of skills will you require to make it happen, and what percentage of
those is already covered in the Kubernetes and Cloud Native Associate certification?
Next let’s analyze what this Study Guide will realistically bring to your learning and
certification journey, keeping in mind that no book or resource will be a single source
of learning for your upskilling activities.
Oracle
The term cloud native refers to the concept of building and running applications
to take advantage of the distributed computing offered by the cloud delivery
model. Cloud native apps are designed and built to exploit the scale, elasticity,
resiliency, and flexibility the cloud provides.
https://www.oracle.com/cloud/cloud-native/what-is-cloud-native
AWS
Cloud native applications are software programs that consist of multiple small,
interdependent services called microservices. Traditionally, developers built
monolithic applications with a single block structure containing all the required
functionalities. By using the cloud-native approach, software developers break
the functionalities into smaller microservices. This makes cloud-native applica‐
tions more agile as these microservices work independently and take minimal
computing resources to run.
https://aws.amazon.com/what-is/cloud-native
Summarizing, besides the fact that cloud native is a fundamental building block
for all these companies’ businesses, and regardless of the specific definitions, the
important point of cloud native development is the ability to leverage capabilities
from cloud computing, and to develop applications in a sustainable and scalable way
by applying microservices, containerization, and other technologies and approaches.
But this level of adoption took some time. During the last two decades, we have
seen the beginning of what I define as the new era of cloud native. New era because
of the appearance of the CNCF, which has been growing since its creation in 2015, at
both the community and technology levels (we will explore more details in Chapter 3).
This generated not only widespread adoption of cloud native technologies such as
Kubernetes, but also the creation of a solid community of experts and adopters. Last
but not least, the Foundation found its way, and the effervescence of events, training,
resources, and new projects has been constant in recent years.
In parallel, the initial resistance of some public and private organizations to any
cloud-related topic, due in part to data residency and privacy concerns, is progressively
disappearing. The cloud paradigm is becoming clearer for everyone, as are its advantages
for adopters. People are adopting and developing in ways that enable them
to leverage the numerous cloud benefits. Bottom-up tool adoption from technical
teams is guiding company-level maturity and making the decision process a
bit easier, and the relevance of cloud economics and FinOps (i.e., financial operations
oriented to optimize cloud investments) practices is helping companies to trust and
control their cloud-related investments. Perfect combination, great timing.
and agility. Their journey serves as an inspiration to companies worldwide,
emphasizing the importance of adaptable infrastructure in today’s digital age.
Zalando - https://www.zalando.com
Zalando is one of Europe’s leading online fashion platforms. With millions of
customers, a vast inventory, and numerous daily transactions, they needed a
robust and scalable technical infrastructure to support their operations. Zalando
has actively embraced a mix of cloud-native technologies to bolster its technical
infrastructure, including tools like Kubernetes, Helm, Argo, and Prometheus.
For example, Zalando transitioned to Kubernetes for orchestrating their contain‐
erized applications, but they needed a way to manage Kubernetes applications
effortlessly. Helm, a package manager for Kubernetes, was adopted to streamline
deployments. Argo, a set of Kubernetes-native tools, was integrated to facilitate
continuous delivery in their Kubernetes environment. Zalando could maintain
and manage Kubernetes resources using declarative configurations, streamlining
their deployment process. They also used Prometheus to monitor their services,
set up alerts, and gain insights into their system’s health in real-time. Zalando’s
journey illustrates how a blend of cloud-native technologies can supercharge a
company’s infrastructure.
The New York Times (NYT) - https://www.nytimes.com
The New York Times, one of the world’s most renowned newspapers, has been
transitioning into the digital age, with a vast portion of its readership now
accessing content online. With the rise in digital users and the delivery of news
in real-time, there was a need to ensure their infrastructure could support the
demand. They initially relied heavily on monolithic applications, which were
becoming increasingly difficult to scale and maintain, especially with a growing
global readership and the demand for faster content delivery. To better serve its
readership, NYT wanted to break its monolithic application into microservices.
Kubernetes was chosen as the orchestration platform due to its technical features,
community support, and growing ecosystem. The New York Times
began its Kubernetes journey by containerizing its applications and then moving
them to Kubernetes. The team decided to start with non-critical applications to
understand Kubernetes better and to mitigate potential risks. Over time, as they
gained confidence and expertise, more critical parts of their infrastructure were
migrated. Thanks to this, the NYT handled reader traffic spikes more efficiently,
and started to run its infrastructure in a more cost-effective manner.
A.P. Moller - Maersk - https://www.maersk.com
Maersk is one of the world’s largest shipping companies. As the global trade
dynamics and customer expectations evolved, Maersk recognized the need to
modernize its IT infrastructure to improve efficiency, reduce costs, and offer bet‐
ter digital solutions to its customers. Their legacy systems were disparate, slow,
and often siloed, and they wanted to consolidate their IT operations and intro‐
• 7 questions related to the CNCF ecosystem, mostly focused on high-level details
of the Foundation and its mission and mechanisms
• 8 questions for cloud native concepts, as an initial validation of your understand‐
ing of cloud computing and cloud native
• 8 questions for Kubernetes and orchestration topics, to start seeing the level of
knowledge required for the KCNA certification
• 5 questions for Kubernetes commands, which are the kubectl instructions
required to manage technical Kubernetes aspects. If you have never worked with
Kubernetes at technical level, don’t worry, we will explore these commands in
Chapter 6.
• 8 questions related to Linux fundamentals, which are technically NOT part of
the official exam curriculum we will explain in Chapter 2, but that will help
you not only understand some Kubernetes-related areas of knowledge, but also
perform better at work.
• Last but not least, 9 questions for other cloud native projects, to explore the
ecosystem of tools and projects (besides Kubernetes). We will describe them in
Chapter 3, but it is good for you to know whether you have some initial or high-level
knowledge of those tools.
Again, don’t try to guess or find the right answers just to “pass” this test. Focus on
choosing answers based on your current level of knowledge, and take notes while you
review the correct answers, especially for the areas you know least.
Ok, you may have missed some questions, but that’s easy to solve as you will soon
know the role and details of the CNCF (Cloud Native Computing Foundation).
Explore Chapter 3 to learn more about this and other related topics.
b. They are both different software design and architecture approaches, based on
one vs. several development “blocks”
c. The connection between data services from different pieces of the software
solution
d. They are similar, only front-end / UI differences
2. What are the main advantages of containers when compared to traditional virtual
machines?
a. Containers run the whole Operating System (OS)
b. Containers are lighter and portable, and more efficient in terms of resources
required
c. Virtual machines are more agile and portable
d. Virtual machines are cheaper and more scalable
3. Find the term that is not directly related to cloud native
a. Infrastructure-as-Code (IaC)
b. DevOps
c. Distributed Ledger Technology (DLT)
d. Elasticity (ability to scale units of workload up or down)
4. Which of these is an actual cloud as-a-Service model?
a. Platform as a Service (PaaS)
b. Back-end as a Service (BaaS)
c. Deployment as a Service (DaaS)
d. Front-end as a Service (FaaS)
5. Which of the following is a key advantage of a microservices architecture?
a. It requires a single technology stack for all services
b. It allows each service to be developed, deployed, and scaled independently
c. It ensures that a failure in one service will cause the entire application to fail
d. It simplifies the application as a single, indivisible unit
6. In a cloud-native environment, what is the main purpose of “containerization”?
a. Increase the size of applications for better performance
b. Package applications with their dependencies and configurations for consis‐
tent deployment
c. Replace virtual machines entirely
d. Store large volumes of data more efficiently
These questions are easy to answer if you are familiar with cloud native content, but
a bit challenging if you are just starting. Pay attention to Chapter 4, which discusses
these and other cloud computing and cloud native terms.
4. Which object is responsible for scaling and managing a set of replica Pods?
a. ReplicaSet
b. Deployment
c. StatefulSet
d. Pod
5. Which of the following is a Kubernetes service that is used to externally expose
your Pod?
a. ClusterIP
b. NodePort
c. PodPort
d. ExposePod
6. In a Kubernetes cluster, what is the main role of the “etcd” component?
a. Scheduling pods on nodes
b. Load balancing the traffic to services
c. Storing configuration data in a key-value format
d. Container runtime for executing the pods
7. What type of Kubernetes controller is best suited for managing stateful applica‐
tions?
a. ReplicaSet
b. DaemonSet
c. StatefulSet
d. Deployment
8. What is the primary purpose of a “ConfigMap”?
a. To store secret data and passwords
b. To define the desired state of a pod
c. To store configuration data and parameters for pods to use
d. To allocate CPU and memory resources for a pod
These Kubernetes topics are among the core building blocks of the KCNA exam.
Don’t worry if you don’t have the answers for some of them; you will certainly
get the required knowledge in Chapters 5 and 6.
The Kubernetes kubectl commands are one of the main areas of knowledge, not only
for the exam but for your professional activities. We will include relevant information
and additional resources for you to make sense of these technical instructions in
Chapter 6.
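As a small taste of what Chapter 6 covers, here is a minimal, illustrative manifest of the kind those kubectl commands operate on. The name web, the label app: web, and the nginx:1.25 image are placeholders chosen for this sketch; assuming access to a running cluster, you would apply it with kubectl apply -f deployment.yaml:

```yaml
# Minimal illustrative Deployment: three replicas of an nginx container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Once applied, kubectl get pods lists the three replicas, and kubectl delete deployment web cleans everything up.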
Test 5 - Linux Fundamentals
1. Which command is used in Linux to view the contents of a directory?
a. view
b. watch
c. dir
d. ls
2. In Linux, what is the primary purpose of the chmod command?
a. Change the ownership of a file
b. Change the file’s modification time
c. Change the permissions of a file
d. Change the location of a file
3. Which of the following directories typically contains system configuration files?
a. /bin
b. /etc
c. /usr
d. /tmp
4. Which command in Linux is used to display your current working directory?
a. cwd
b. dir
c. pwd
d. locate
5. What is the primary role of the Linux kernel?
a. Providing a graphical user interface for users
b. Running shell commands and scripts
c. Managing the system’s hardware and resources
d. Offering network services like DNS and SSH
6. Which command in Linux is used to kill a running process?
a. terminate
b. stop
c. exit
d. kill
7. Which file contains the system-wide environment variables?
Depending on your existing background and experience, you may have passed or
failed most of these questions. In Chapter 3, we will point to a variety of Linux-related
topics, as well as some existing O’Reilly and external resources that will help you catch
up and gain some base knowledge before taking the KCNA exam.
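As a warm-up for the questions above, here is a small, runnable sketch of several of these commands, assuming a POSIX shell on Linux and working in a throwaway directory:

```shell
cd "$(mktemp -d)"        # work in a throwaway directory
pwd                      # print the current working directory
touch notes.txt          # create an empty file
ls                       # list the directory contents
chmod 600 notes.txt      # restrict permissions to owner read/write
ls -l notes.txt          # shows -rw------- permissions
sleep 60 &               # start a background process
kill $!                  # terminate it by its PID
ls /etc | head -n 3      # /etc holds system configuration files
```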
a. Rook
b. Linkerd
c. Vitess
d. etcd
5. If you need a distributed logging project, which one would you likely opt for?
a. OpenTracing
b. CoreDNS
c. CNI (Container Network Interface)
d. Fluentd
6. Find the CNCF project among these open source initiatives
a. Apache Atlas
b. CRI-O
c. Feathr
d. Eclipse Data Connector
7. Which CNCF project provides a high-performance, lightweight service mesh with
runtime debugging, observability, and security?
a. Istio
b. Jaeger
c. Argo
d. Linkerd
8. If you’re interested in a CNCF project that offers distributed tracing to help
troubleshoot latency problems in microservice architectures, which one would
you refer to?
a. CRI-O
b. Vitess
c. Jaeger
d. TIKV
9. Which CNCF project aims to provide a consistent and platform-agnostic way for
plugins to handle network configuration of containers?
a. Helm
b. Linkerd
c. CNI (Container Network Interface)
This part is relatively complex for new learners, and even if the goal is not to memorize
all CNCF project names, you will need to be aware of the most important ones
(especially graduated projects with relevant traction from developers and companies).
The KCNA certainly includes questions related to them, and this Study Guide will
help you develop this knowledge. As you can see in Figure 1-2, you will need to pay
special attention to some parts of the book and other resources we will mention in
this KCNA Study Guide, depending on your weakest areas of knowledge from the
self-questionnaire, and your preliminary level of experience and knowledge:
Summarizing, these are relatively basic questions that cover only part of the KCNA
exam scope. However, if you found them difficult, or hesitated while answering
some of them, you will need to carefully explore Chapters 2 to 7. Also, the
KCNA exam won’t go into highly practical Kubernetes details (compared to other
Kubernetes exams, the KCNA does not include hands-on labs for live exercises), but
you need to know the “ABCs” of it. The key is to gain, at the very least, basic knowledge
of all these relevant areas.
PLACEHOLDER EXPERT INSIGHTS
Interview with Walid Shaari
Adrian: Hi. Welcome to this episode of this series of expert insights for the KCNA
exam. Today we have the pleasure to welcome Walid Shaari, who is an expert in the
cloud native community.
Walid: Thank you for having me here.
Adrian: Many of our learners here are getting started on their Cloud Native journey
so let’s start with an introduction. Who is Walid and what’s your relation with the
Cloud Native ecosystem?
Walid: I’m currently working for Amazon Web Services for the public sector as a
journalist and as a part of the containers community team, advocating for container
services. Before this, I was leading the Ansible and the Docker community in Saudi
Arabia where the adoption of containers and cloud native technology is still in its
early stages. So when Docker took the world by surprise, I started the meetup and
it was quite an eye opener. There was a lot of interest in containers, especially from
developers. And it opened a lot of doors for me. In fact, my current career is a
result of this community. So for the question of who’s Walid, I see myself as a bridge
between different cloud communities. I’m passionate about open source. And one
thing about CNCF and about a cloud native community, you can see that it’s the best
community in terms of inclusiveness and knowledge sharing. This sort of culture is
exhibited during KubeCon sessions and other events.
Adrian: Because everyone has to accept the fact that we cannot learn everything; the
ecosystem is huge. It's not only about Docker and Kubernetes now, it's about all the
projects around the community, all the certifications, all the training. So we are part
of an ecosystem.
Walid: Exactly. When the CNCF, the Cloud Native Computing Foundation, was
established with its first project, Kubernetes, donated by Google and supported by
everybody else, they saw the gaps between the enterprises, the organizations, and the
people. One of them was the skills gap, which they addressed by providing curricula,
trainings, and measures for companies to check: are these people qualified enough to
be on my team, or how can I upskill my team to be good enough? How can I have some
kind of measure to qualify people within my team, or even incentivize them to learn?
Because of this, I’m very grateful to the CNCF in terms of finding these gaps and
addressing them and bridging between us, the individuals and the enterprises that
need us.
Adrian: You have been leading activities of CNCF on different journeys and events,
even some local events. I think it was exactly related to this exam as well, right?
you provide was perfect, because this is the kind of detail that you can expect. That
is normally unexpected when we talk about an associate level kind of certification.
That’s the reason why we need to go deeper in our preparation, even if it’s an intro
level certification. Because that’s the kind of question we will get. And the other part
is, all the command lines. You mentioned the command lines. If I ask you about the
trickiest kind of questions that people can expect in a KCNA, what do you think will
be the trickiest one?
Walid: The trickiest thing for newcomers is that they need to remember the command lines.
If you are on the terminal, it is very easy to get help, but if you are a newcomer
and you don't have enough hands-on experience, you might find it tricky to find the
right answer. Let's say, for example, they are asking you about the CPU utilization
across nodes, or the utilization of the pods, or something like this. You need
to be familiar with the Linux command line. On the Linux command line, you usually
use ps (process status) with aux, or -ef. This concept does not exist in the
Kubernetes world. In the Kubernetes world, there's kubectl top: you may see kubectl
top nodes, or kubectl top pods. But I can try to confuse you with some commands
that look genuine but are not, like kubectl ps. It could be a Linux command
mixed with kubectl commands.
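To make Walid's example concrete, here is a quick sketch contrasting the Linux commands with their Kubernetes counterparts. The kubectl lines assume a cluster with metrics-server installed, so they are guarded and will simply be skipped elsewhere:

```shell
# Linux: per-process resource usage on a single host
ps aux | head -5     # BSD-style flags: all processes, user-oriented output
ps -ef | head -5     # System V-style flags: full-format listing

# Kubernetes: resource usage comes from the Metrics API instead
# (needs a cluster with metrics-server; guarded so the snippet is
# safe to paste on machines without kubectl)
if command -v kubectl >/dev/null 2>&1; then
  kubectl top nodes      # CPU/memory usage per node
  kubectl top pods -A    # usage per pod, across all namespaces
fi
# Note: "kubectl ps" is NOT a real command; exam distractors can look like this.
```

This is exactly the kind of pairing worth drilling: one muscle memory for the host, another for the cluster.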
Adrian: And if we add the prefixes, affixes and stuff, it gets tricky.
Walid: Yes. The other confusing thing is when you go into details. I would say 60%
of the exam is actually Kubernetes focused. The other 40% is the ecosystem: the
CNCF organization and the ecosystem of observability and application delivery,
GitOps, and things like that. In this 60%, some of it really goes into details, like the
scheduling. I remember I got one question wrong. I don't remember it exactly now,
but I got it wrong because I didn't think about it at that time, and I didn't really
prepare. So basically, newcomers, who this exam is actually targeted at, need to know
their theory, and they need to have some hands-on knowledge. Andrew Brown has a
free course, around 14 hours, with some hands-on follow-up. Basically, he walks you
through some exercises. A Cloud Guru also has a course with hands-on training. I'm sure you
also provide enough hands-on exercises and labs that people can go through. It’s a
pity that we lost some of the online available hands-on materials on Katacoda. Some
vendors like Red Hat have some available training but it’s very focused on their own
distribution, OpenShift. So they have some tutorials that people can follow in GitOps.
The other 40% of the test is more what I call level 100. For example, what is a service
mesh? And what are the prominent solutions out there? What about solutions that
have been promoted or the ones that are not in the sandbox? What is the sandbox?
What does it mean for a project to be graduated? The life cycle of a project in CNCF.
Adrian: You mentioned the ecosystem including activities and projects. We know that
besides Kubernetes, there are other projects that are relevant for monitoring or for
concepts that are related. For example, I’m a telematics engineer, so working with
Kubernetes is more natural to me. Not saying that it’s easy, it’s very difficult, but it’s
more natural for me to understand because I have that background. With the variety
of learner profiles that we get for this exam, we can assume that people will have a
stronger background in some specific areas and a weaker one in others, so we are
encouraging them to analyze where they need to improve based on their existing
knowledge. What’s the best way to practice today’s exam topics? You have mentioned
a couple of them.
Walid: Nowadays, if someone wants to practice on their own laptop and learn about
Kubernetes, there is kind, K-I-N-D, Kubernetes in Docker. First, they need to cover
the basics, especially with Docker, where there are good free courses out there.
There are kind, Minikube, and other services where you can have a managed cluster.
There is lots of YouTube content. There are books: this book, for example. There are
meetups. I feel the meetups are good because they are interactive. And there are the
conferences like KubeCon and Kubernetes Community Days. CNCF saw that they
could not scale when it came to conferences, which were getting really huge and
really expensive. So one way to scale was doing Kubernetes Community Days around
the world, and we see lots of them nowadays. Usually, there are workshops around
them also. Around the big events, usually there are other events for specific
technologies like GitOps, service mesh, networking, special distributions, and the
operator framework from Rancher. They were doing this every KubeCon: how to
create an operator. An operator in Kubernetes is a way to package human knowledge,
operational knowledge, and technology, especially for stateful applications. This
packaging makes it easy to install a software stack like a Postgres cluster, and to
update it and monitor it. So it takes you from day zero to day two and beyond,
hopefully. There
are many resources. The CNCF Slack is one resource. The CNCF events in terms
of meetups, in terms of chapters, in terms of conferences. Books from O'Reilly and
from all other publishers. And the local events. There are some GitHub resources
also where they aggregate and curate some content. And the best thing is actually to
participate in projects. If someone wants to learn more about something and he has
a business problem or university challenge or project, it’s better to participate in this
project and explore and ask the community. Just ping somebody from the community
to ask, especially in Slack.
Adrian: In the book, we explain the notion of a contributor and maintainer, two
different types of roles. Have you been a contributor to any project or a maintainer?
Walid: Unfortunately, not for a software project. I have been maintaining resources
for the CKA and CKS exams. For projects, when I see a bug, when I see an
issue, I usually raise an issue at the very least, if I cannot create a pull request. I’m
trying to get involved with the documentation. The easiest thing: especially in
CNCF-related projects, there are tags for first-time contributors. Basically, these
Adrian: This is wonderful. If we had to summarize, I think that one of the key words
for people preparing their journey is awareness. Just being aware of the
existence of things, the kind of projects that are there, the ways to contribute or to
participate, the kind of questions that you can get. That’s what we’re trying to cover
in this book as a study guide. We always mention, no book or study guide will be a
single resource to prepare for Kubernetes or Cloud Native topics. You need to go and
check all the resources and complement it and create your own mix. But this is very
aligned with what we’re trying to do there.
Walid: Yes. Different people have different tastes. Some people like books, some
people like videos, some people like podcasts. Whatever floats your boat. But start with
the Linux Foundation, start with the GitHub repo. CNCF has a GitHub repo for
their exam curriculum. They keep updating it, and they actually mention some
resources there.
Adrian: Yes. Very good selection. At the end of the book, we are also adding, let's
say, the champions in the community, including yourself and your personal repositories,
which you already recommend as a good structure for this exam. I remember
other people doing the same. These are good resources that are helping learners
around the world. Thank you.
Walid: Yes. Thank you.
Adrian: Well, I’m very happy we got some time to discuss these topics. I cannot wait
to release everything, the material, and what you mentioned, not in the format of a
video or a book. It’s small pieces for people to continue improving their learning and
to pass the exams. It’s not necessarily easy, even if it’s beginner level, but I think that
this is the way.
Walid: This is the way. True. Perfect.
Adrian: Well, thank you very much again. Have a lovely day.
Walid: Thank you, Adrian. Thank you very much.
We hope you have enjoyed this expert discussion with Walid Shaari. He is certainly
one of the top KCNA experts out there, and part of the original team who created the
exam questions. Feel free to explore his valuable KCNA resources, such as his KCNA
repository with relevant information to prepare for the exam, and the amazing series
of three video sessions (including part 1 and part 2, and a deep dive of container
orchestration) with him and his colleague Brad McCoy.
This is the first of a series of expert interviews that will bring you applied and
exclusive insights, related to the KCNA exam and to the cloud native ecosystem and
its endless possibilities for upskilling, collaboration, and professional opportunities.
Figure 1-3. Study Notes - Chapter 1
Learning Kubernetes and getting certified is not an easy task. Just a few years ago,
available certifications on the topic were quite advanced for newcomers, but the
KCNA exam and certification are now feasible learning goals, with a good balance
between entry level elements and useful pieces of knowledge for Kubernetes practi‐
tioners. Also, it is an initial proof of value and knowledge for candidates looking for
professional opportunities.
If you are reading this book, that means you want to get the KCNA certification
from The Linux Foundation, but you may know less about the Foundation itself (don't
worry, we will cover it in this second chapter). Also, you should know that
passing this exam demonstrates a strong understanding of Kubernetes and cloud-
native fundamentals, and it will also show that you have a grasp of the principles of
cloud-native development, security, and more. Thus, having this certification under your
belt prepares you for further advanced and specialized credentials, as well as signaling
to the industry your relevant and up-to-date knowledge on the topic.
But preparing for and passing an online exam, especially if it is the first time you do it, is
not easy. In this chapter we will explain the exam details and logistics, in order
to set you up for success. We will review its structure, the topics covered, and what to
expect on game day.
Let’s start by learning about the exam creation and its origins.
The Linux Foundation was born in 2007 from the merger of two existing organiza‐
tions, the Free Standards Group (FSG) and Open Source Development Labs (OSDL).
It is a non-profit organization that “provides a neutral, trusted hub for developers and
organizations to code, manage, and scale open technology projects and ecosystems”. It
supports companies and developers to identify and contribute to projects that address
the challenges of industry and technology for the benefit of society.
Seeing that the CNCF is just one of the many organizations under The Linux
Foundation umbrella, it is not surprising that the Foundation is a wealth of knowledge
when it comes to online learning resources and documentation. The Linux Foundation works with
all these open source communities in order to create training, resources, and industry
standard certifications. It offers various learning paths composed of both self-paced
and instructor-led courses and materials, as well as certifications organized per skills
and technology groups.
The KCNA exam that is the subject of this book falls under The Linux Foundation's
Cloud & Containers Learning Path. This and other learning paths include a vast
selection of system administration, Linux development, telecommunications networking,
cybersecurity, DevOps, and other cloud topics. Based on the results of the self-
questionnaire in Chapter 1, you may want to explore the different learning paths
according to the knowledge areas of improvement you have detected (i.e., those
where you feel you need to improve a bit more). For example, if you have failed most
of the Linux-related questions, you can go to the System Administration Learning Path.
You can consider these two URLs as the main source of terms and
descriptions, very useful for your cloud native upskilling, including
the KCNA exam preparation (some questions focus on core terms
that are described in these two pages).
Now, even if the official KCNA curriculum is the main scope reference for this
exam, you may find some additional pieces of knowledge that cannot be intuitively
deduced from the curriculum itself. For example, you may get a mix of descriptive
and command-related questions, which makes sense but may not be clear to
newcomers reading the official curriculum. Or the questions may rely on
specific preliminary knowledge and some previous experience. For that reason, we
will add a few additional areas of knowledge that we believe will support both your
KCNA journey and the post-certification work activities you choose.
• Bookmark, read, and understand the main kubectl commands. We will of course
discuss them later in Chapter 6, but the official documentation includes those
that you need to know, including the syntax and a very useful cheat sheet you can
print and study.
• Play with the O’Reilly Kubernetes Sandbox. This and other sandboxes offer a
one-click, on-demand workspace for you to start exploring and testing different
commands and configurations.
• Leverage the existing interactive Kubernetes labs from O’Reilly. They are based
on the amazing Katacoda platform, and they offer a variety of step-by-step
scenarios (e.g., defining and deploying resources, launching nodes). Many of them
were developed by one of our industry experts, specifically Benjamin Muschko,
who will join us in Chapter 4.
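As a warm-up for the bullets above, here is a minimal sketch you could run against a disposable test cluster such as minikube or kind. The deployment name web is purely illustrative, and the whole block is guarded so it is harmless where kubectl is not installed:

```shell
if command -v kubectl >/dev/null 2>&1; then
  kubectl cluster-info                         # where is the control plane?
  kubectl get nodes -o wide                    # cluster inventory
  kubectl create deployment web --image=nginx  # declare a workload
  kubectl scale deployment web --replicas=3    # scale it out
  kubectl get pods -l app=web                  # pods carry the app label
  kubectl explain deployment.spec | head -20   # built-in API field docs
  kubectl delete deployment web                # clean up
else
  echo "kubectl not found; install it and point it at a test cluster first"
fi
```

Typing these a few times is far more effective than memorizing them from the cheat sheet alone.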
• Besides understanding Linux and its origins, you could focus on the command
line, which, like kubectl, is a user interface without windows or icons, just
text. To get some initial knowledge, you can leverage existing cheat
sheets, or applied tutorials from renowned industry organizations.
• As before, you can leverage the O’Reilly Sandbox platform. Concretely, you
have two options: a regular Linux command line, and a second version based on
Rocky Linux (in case you want to work with Red Hat-compatible environments).
The O’Reilly Learning platform also includes specific Linux scenarios.
• Ideally, if you have time and you want to go deeper, you can always take the
Linux Foundation Certified IT Associate (LFCA) exam, or at least take a look at
the related intro level and free resources.
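To complement the Linux bullets above, here is a short, self-contained warm-up; every command is standard coreutils/POSIX, and the file path is just an example:

```shell
pwd                                  # where am I?
printf 'hello\n' > /tmp/kcna-demo.txt
cat /tmp/kcna-demo.txt               # read the file back
grep -c hello /tmp/kcna-demo.txt     # count matching lines (prints 1)
ls -l /tmp/kcna-demo.txt             # permissions, owner, size, timestamp
ps -e | wc -l                        # rough count of running processes
rm /tmp/kcna-demo.txt                # clean up
```

If any of these feel unfamiliar, that is a strong signal to spend time on the Linux fundamentals before the exam.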
• Check the vast list of API topics from the O’Reilly Learning platform.
• Leverage didactic material from industry actors such as Postman (an API
platform for building and using APIs), which has some 101 intro videos and
documentation.
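The API habit is easy to start building from the shell. The JSON payload below is made up (inlined so the example works offline); with a real cluster, kubectl get pods -o json returns similarly structured output:

```shell
# A made-up API response, shaped like a Kubernetes-style list
response='{"kind":"PodList","items":[{"name":"web-1"},{"name":"web-2"}]}'

# Crude field extraction with grep; jq is the nicer tool when available
echo "$response" | grep -o '"name":"[^"]*"'
# prints:
#   "name":"web-1"
#   "name":"web-2"
```

Getting comfortable reading and slicing JSON responses pays off in nearly every cloud native tool you will touch.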
Part 9 - Telematics
You may not even know this, but there is an area of study called telematics that
combines telecommunications, electrical engineering, and IT / computer science.
One of its core building blocks is what we call networking, which is the ability to
connect diverse systems so they can operate interactively. For example, the internet is
a network that connects your personal device with remote resources such as websites,
and online platforms.
This area relies on relatively complex knowledge, traditionally related to engineering
jobs. However, if you work with cloud native tools (including Kubernetes), the notion
of networking will come up at some point. We will cover this topic in Chapter
6, but don’t hesitate to take a look at the official Kubernetes networking documentation
beforehand.
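Before Chapter 6, it may help to try a few basic networking commands on your own machine. These come from the standard iproute2 and glibc toolsets (guarded where a tool may be missing), and the output will vary per machine:

```shell
if command -v ip >/dev/null 2>&1; then
  ip addr show     # network interfaces and their IP addresses
  ip route show    # the host routing table: default gateway, local subnets
fi
getent hosts localhost   # name resolution through the system resolver
if command -v ss >/dev/null 2>&1; then
  ss -tln          # listening TCP sockets (the modern netstat)
fi
```

Interfaces, routes, name resolution, and ports are the same four ideas that Kubernetes networking builds on, just at cluster scale.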
3. In the authentication page you can see in Figure 2-5, create an account (sign up)
if you are a first time user, or Sign in with your credentials or SSO (i.e., existing
Gmail, Facebook, GitHub, and LinkedIn accounts) if you already have one. If
you’re creating the account for the first time, a request to verify your email will be
sent to the email address provided.
4. From your new learning dashboard, scroll down and find the browse section.
Once there, click on search for content, and type “KCNA” to find the KCNA
exam from the list of courses and exams available. As seen below in Figure 2-6,
there are two options to register for the KCNA exam. You can register for the
exam alone for $250, or opt for the collection that includes the exam and the
Kubernetes and Cloud Native Essential Training (LFS250) for $299. As a test
taker, you have the choice to take it in English or in Japanese.
5. After selecting the KCNA exam, from Figure 2-7 click Enroll Now and complete
the checkout and payment process. You can make the payment online using a
credit card, and/or add any coupon you may have from an ongoing campaign,
or vouchers from your company’s membership (it can be the case for CNCF and
The Linux Foundation members).
6. After you complete the process, you will be able to explore the checklist, and to
read the exam handbook.
7. Finally, you can proceed to schedule your exam. This option is only available
once you complete the preliminary steps. Pay attention to the date you prefer,
and the timezone. Online exam booking platforms will adapt to your specific
timezone, and let you choose among the available exam languages.
This series of steps concludes the initial logistics required for your KCNA exam. Now,
let’s proceed with the preparation details.
As you know already, the KCNA exam covers not only Kubernetes topics, but also
other cloud native fundamentals. That includes the CNCF structure, which is related
to how an open source community works, including its governance, project incuba‐
tion processes, individual contribution, etc. We will cover all that in Chapter 3. As a
KCNA candidate, this information is important for you because it is part of the scope
of the exam questions, but it is also very relevant for your cloud native journey.
From CNCF’s website: “The Cloud Native Computing Foundation (CNCF) hosts critical
components of the global technology infrastructure. We bring together the world’s top
developers, end users, and vendors and run the largest open source developer conferen‐
ces. CNCF is part of the nonprofit Linux Foundation”.
Currently, the Foundation is supported by companies and individual members who
are committed to advancing the cloud native landscape. The CNCF is governed by
a board of directors and is overseen by The Linux Foundation, the non-profit
organization that promotes the use of open source software. The CNCF website itself
is a great source of information and resources.
The reality is that the establishment of the CNCF in 2015 had a great impact on the cloud
native industry, as it provided a central hub for the development and promotion
of cloud native technologies and best practices. Before that, there was no single
organization that was dedicated to advancing the cloud native landscape, and many
different technologies and approaches were being developed in isolation.
The CNCF has played a key role in standardizing and consolidating the cloud native
ecosystem, and has helped to drive the widespread adoption of cloud native technolo‐
gies by providing a forum for collaboration and sharing of ideas and knowledge. The
CNCF has also helped to establish a set of common practices and standards for cloud
native development, which has made it easier for organizations to adopt cloud native
approaches and technologies.
Overall, the birth of the CNCF in 2015 had a significant and lasting impact on the
cloud native industry, and has helped to accelerate the adoption and development
of cloud native technologies and practices, including Kubernetes but also other great
cloud native projects we will explore in this chapter, as they will be relevant for your
exam preparation.
Concretely, there were several key events that led to what we understand as cloud
native and containerization today. Starting from the early days:
The containerization origins
• 1979 - UNIX (a family of operating systems that derive from the original AT&T
UNIX OS, developed by Bell Labs) releases the chroot system call in its version
7. chroot is a command in Unix and Linux that allows changing the root direc‐
tory for a running process and its children, often used for sandboxing, testing,
and recovery purposes. The idea was to provide an isolated disk space where
processes think it’s their root, but in reality, it’s just a subdirectory of the real root
filesystem. This can be seen as the predecessor of what modern containerization
is, even if it had more limited features.
• 1999-2000 - FreeBSD (an operating system used to power servers, desktops, and
embedded platforms) releases jails, which are an evolution from the chroot, and
relatively similar to today’s containers.
• 2003-2004 - Google develops the Borg system. Technically, Borg was a “cluster
manager that runs hundreds of thousands of jobs, from many thousands of dif‐
ferent applications, across a number of clusters each with up to tens of thousands
of machines”. Google released a paper that explains its technical details. Basically,
Borg was the predecessor of Kubernetes, but it wasn’t open source yet.
• 2008 - The Linux Containers (LXC) project is officially released, as an umbrella
project behind initiatives such as LXC, LXCFS, distrobuilder, libresource and
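The chroot milestone from the first bullet above is easy to demonstrate. A minimal sketch, assuming a statically linked busybox is available and you have root; the /tmp/jail path is illustrative:

```shell
# Build a tiny root filesystem containing just a static shell
mkdir -p /tmp/jail/bin
cp /bin/busybox /tmp/jail/bin/ 2>/dev/null \
  || echo "busybox not found; install it to try the jail"

# Entering the jail requires root; inside it, / is really /tmp/jail,
# so the process cannot see (or escape to) the real root filesystem.
if [ "$(id -u)" -eq 0 ] && [ -x /tmp/jail/bin/busybox ]; then
  chroot /tmp/jail /bin/busybox ls / \
    || echo "note: busybox must be statically linked for this to work"
fi
rm -rf /tmp/jail
```

Modern containers add namespaces, cgroups, and images on top, but this filesystem isolation is the seed of the whole idea.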
Concretely, besides the governing board, there are other key elements of the CNCF
structure that help make the community dynamics work:
Technical Oversight Committee (TOC)
A group of technical experts who oversee the technical direction, provide neutral
guidance, and evaluate the projects submitted to CNCF. The TOC plays a critical
role in steering the foundation’s technical vision and direction, and it’s responsi‐
ble for the technical leadership and decision-making in different areas such as
project oversight, and connection with the rest of CNCF stakeholders.
End User Community
The group of cloud native adopters that provides insights and feedback on the
direction of cloud-native technologies, ensuring that projects remain relevant to
real-world needs. From CNCF’s website, the end user community is a group of
“experienced practitioners who help power CNCF’s end user-driven open source
ecosystem, steering production experience and accelerating cloud native project
growth”. You can explore the full list of end user companies, including those from
the previously mentioned success stories.
Special Interest Groups (SIG)
These groups focus on special topics based on areas of interest or needs from
the community, which are not directly related to one single project. Transversal
topics such as contributor strategy, storage and security, network, and others are
part of the existing group of SIGs.
Community Forums
The CNCF boasts an active community that collaborates across various platforms
and channels, maintaining a number of online spaces where members can ask
questions, share information, and collaborate on projects. Concretely:
Online discussion
The CNCF community has an active Slack workspace where members can dis‐
cuss topics related to cloud-native technologies, ask questions, and share insights.
Within the CNCF Slack, there are channels dedicated to specific CNCF projects
like Kubernetes, Prometheus, and others. Recommendations:
• Join the cloud-native.slack.com workspace, and explore the different chan‐
nels. For your KCNA journey, the #certification channel will help you con‐
nect with other exam candidates and post any questions.
• In addition to the CNCF Slack, you may want to take a look at the dedicated
Kubernetes Slack workspace, which is not managed by CNCF but includes
the #kcna-exam-prep and #kubernetes-novice channels that contain good
advice and resources too.
Mailing Lists
CNCF uses mailing lists extensively for asynchronous communication, including
general CNCF mailing lists and project-specific lists where discussions about
development, issues, and updates related to that project take place. Recommen‐
dations:
Educational resources
The CNCF offers a range of educational resources, including online courses, webi‐
nars, and tutorials, to help people learn about cloud native technologies and best
practices. As a KCNA candidate, these resources are gold for you to understand cloud
native topics, and to prepare your exam:
The CNCF Landscape
An interactive tool that allows you to get a single view of CNCF-related actors
and projects, with filters by provider, CNCF project, industry, etc. It is (really)
full of information, so you will need to navigate it, but it is a very rich and
always-updated source of information. For purely illustrative purposes, because it cannot
be shown in just a single page (or even a website), you can see in Figure 3-3 what
the landscape looks like, or you can simply visit it by typing l.cncf.io:
In parallel with this cloud native journey, you can explore the cloud native
maturity levels, a CNCF framework for adopter companies to understand and
improve their cloud native adoption.
Sandbox projects
Entry level for new or experimental projects, offering neutral hosting and access
to the broader CNCF community for growth and exposure. The key criteria
to enter the sandbox stage are adoption of the CNCF Code of Conduct, TOC
sponsorship, and an open-source license. This stage is their way to start.
Incubating projects
Promising projects, with a substantial level of community contributions, an active
user base, and integration within the CNCF ecosystem. They receive mentorship
and increased visibility, but they require TOC sponsorship and passing a CNCF
security audit.
Graduated projects
Mature projects with wide adoption, clear governance, and regular releases. They
have been fully tested and have shown that they are stable and widely used. These
projects obtain premier visibility within the CNCF ecosystem, and prioritized
access to CNCF resources and support.
Archived projects
Archived projects aren’t being maintained or supported by the CNCF anymore,
including the interruption of marketing and other privileges that are only avail‐
able for active projects. Some examples of archived projects? Brigade, Open
Service Mesh, OpenTracing, and rkt.
Related to the CNCF projects, there are two levels of involvement:
In this chapter, we will explore the essential concepts related to Cloud Computing.
Having a solid understanding of the basics allows us to build upon them for advanced
implementation. For this purpose, and even if we know that every exam candidate
may have different levels of preexisting knowledge, we will cover the end-to-end of
cloud-related topics, from the basics to other advanced notions that you will leverage
later in Chapters 5 and 6 for Kubernetes.
Concretely, we will cover the introduction to general cloud computing, its evolution,
infrastructure, commercialization, and the road to cloud native. The goal is to
provide a holistic picture of how cloud computing evolved into what we know today. The
material in this chapter includes explanations of topics covered in the KCNA exam.
General Cloud Computing
Let’s start from the beginning. What is cloud computing? And what do we mean by
“clouds”?
The cloud refers to servers that are remotely accessible over the internet, along with
the software and databases that run on those servers. Imagine buildings (often
referred to as data centers) full of physical servers. A company that owns these data
centers rents out the computing power of these servers to other users or companies,
who then do not have to manage physical servers themselves. Running applications
this way, under a computing model that remotely allocates computing resources
owned by a third party to external users, is what is generally referred to as cloud
computing.
More officially, the CNCF defines cloud computing as “a model that offers compute
resources like CPU, network, and disk capabilities on-demand over the internet.
Cloud computing gives users the ability to access and use computing power in a
remote physical location. Cloud Service Providers (CSP) like AWS, GCP, Azure,
DigitalOcean, and others all offer third parties the ability to rent access to compute
resources in multiple geographic locations”.
Before the advent of cloud computing, organizations typically had to rely on building
and managing their own physical infrastructure to support their computing needs.
This is typically called on-premises or bare metal infrastructure. It involved procur‐
ing servers, storage devices, networking equipment, and other hardware components,
as well as establishing data centers or server rooms to house and maintain these
resources, which are costly and often cumbersome to manage.
Also, you may have heard the term monolithic application. The term refers to
traditional software built as a single, complicated piece that continues to
increase in size and complexity as new features and abilities are added to it. In
order for the monolith to function, it must reside and run on a single system that
supports it. This hardware system of specific capacity such as memory, networking
capabilities, storage, and computational ability is complex to build, maintain and
expand. Scaling up or down a part of this operation is difficult as it requires another
server with a load balancer. To do this, organizations have to assess the needs,
weigh the benefits against the cost, go through a procurement process and once
obtained, set it up. This process can take months or years, making it difficult for the
organizations to respond quickly to the demand. This is not to mention the work that
the engineers have to do in order to keep all server instances up to date with upgrades
and security or operational patches. Disruptions in service are hard to avoid.
If a monolith is a hard-to-move mammoth, then microservices are like a colony of
bees. Bees come together to carry out many well-organized functions. Their shapes
and sizes are easy to adapt. They can adjust and move with agility. That is why
organizations are increasingly moving toward microservices.
Now, think of the fact that microservices are small and independent. They can
communicate with each other easily. That means there is no need for them to reside
together on a single piece of hardware. The hardware can similarly be smaller and
separated. Each microservice can find the hardware that best suits its characteristics
and needs. This is where cloud computing comes in.
As connectivity, storage, and hardware technologies advanced, cloud computing
became an answer to the challenges associated with traditional computing and mon‐
oliths. With Amazon’s launch of Amazon Web Services in 2006, which enabled
organizations to rent server space remotely, cloud computing gained mainstream
adoption and revolutionized the way organizations approach their computing needs.
With cloud computing, organizations can leverage the resources and services of cloud
service providers, accessing computing power, storage, and other resources over the
internet. They can scale resources up or down on-demand, pay for what they use, and
benefit from the provider’s expertise in managing and maintaining the underlying
infrastructure. More importantly, it enabled organizations to focus more on their core
business activities and software development rather than worrying about hardware
maintenance, infrastructure planning, and scalability constraints.
IaaS, Infrastructure-as-a-Service
CSPs provide users with computing, networking, and storage resources to create and run their applications. Users have a high level of control, but also a higher maintenance burden. In this case, the users rent bare-bones server space and storage from the providers. A popular analogy for IaaS is that it is like renting a plot of land: the possibilities are endless, and you can build anything you want on it. However, know that you will bear the cost of building materials and equipment.
One of the reasons IaaS is appealing to users is that it reduces capital expenditures by transforming them into operational expenses. Instead of procuring the hardware and building their own cooled server room, they simply engage the CSP to provision and manage the infrastructure, which can be accessed over the internet.
Other benefits of IaaS include scalability, efficiency, and boosted productivity. In scenarios where business needs are unpredictable, IaaS allows users to respond by quickly increasing or decreasing resources as needed. The IT professionals in the organization do not have to worry about maintaining a highly available data center. The CSPs are in charge of ensuring that the service is available to users at all times, even when there is a problem with a piece of equipment. This allows the organization to concentrate on strategic business activities.
Examples of IaaS are Amazon Web Services (AWS) EC2, Microsoft Azure Virtual
Machines, and Google Compute Engine.
SaaS, Software-as-a-Service
The CSPs offer applications plus the required platform and IT infrastructure over the
internet. The cloud providers build, maintain and offer the applications for the users
on demand. Generally, these ready-to-use products are available for a subscription
fee. In this model, the users do not have to construct the building themselves. The building is available for them to rent; all they have to do is move in and pay the rent. Most of the time, renters can use the building as if they owned it. However, while the landlord will fix things when they break, not every feature can be customized to the renter's needs.
SaaS makes it very easy for organizations to adopt new technology, with very little effort needed to train their staff. There is no need for an IT staff member to manage the software, as it is all taken care of by the provider.
This model is simple to use and requires limited maintenance from the users. Popular SaaS products you may be familiar with include Salesforce, Google Workspace, Slack, and Dropbox.
CaaS, Container-as-a-Service
While you may be more familiar with IaaS, PaaS and SaaS, CaaS is gaining popularity
among IT professionals, particularly microservices developers. CaaS refers to the
automated hosting and deployment of containerized software packages. It is usually available for a subscription fee, hence the "as-a-service" concept. Because CaaS is designed around industry standards such as the Open Container Initiative (OCI) and the Container Runtime Interface (CRI), it makes it easy to move from one cloud provider to another.
FaaS, Function-as-a-Service
CSPs enable users to create packages of code as functions without maintaining any
infrastructure. This is a very granular cloud usage option which makes it especially
interesting for development activities. To use the same analogy, FaaS allows the renter
to pay only for the room in the building that’s being used at the time.
The FaaS model allows developers to write and deploy code in the form of functions that
are executed in response to specific events or triggers. Since the code is deployed
without provisioning or managing servers or backend infrastructure, it is often
referred to as serverless cloud computing. FaaS is suitable for users who wish to
run specific functions of the application without managing the servers. The users
only have to provide the code and pay per duration or number of executions.
Some CaaS tools mentioned in the previous section are integrated into serverless environments, so some CaaS solutions share characteristics with FaaS. The key point for FaaS is that the underlying infrastructure and platform are entirely managed by the provider. Examples of FaaS include AWS Lambda and Google Cloud Functions.
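As a minimal sketch of the idea, the following Python function mimics the shape of an AWS Lambda-style handler. The event contents and the local invocation at the end are illustrative: in a real FaaS platform, the provider delivers the event and runs the function for you.

```python
import json


def handler(event, context=None):
    """Runs only when triggered; the developer manages no servers."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Simulating locally the trigger a FaaS platform would deliver:
response = handler({"name": "KCNA"})
print(response["statusCode"])  # 200
```

Because the function holds no infrastructure of its own, the provider can start it on demand and bill only for the executions that actually happen.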
Public Cloud
The physical cloud resources (servers) are organized and distributed by the cloud provider. This means that two or more companies can use the same physical infrastructure. Unlike public parks or libraries, however, public clouds do have a cost associated with their use.
Private Cloud
The whole "cloud" is dedicated to a single organization. It is commonly used in environments where a very high level of control is required. Private clouds allow users to leverage all the advantages that come with cloud computing, but with the access control, security, and resource customization of an on-premises infrastructure. Some organizations may opt for a private cloud if they deal with sensitive and confidential information, or with strict regulations that mandate greater visibility and control over access, software/hardware choices, and governance. What's interesting here is that because private clouds (often dubbed "on-prem" solutions) are built on the same fundamentals as public clouds, an organization can easily migrate to or share its workload with public clouds, leading to what is called a "Hybrid Cloud" solution.
Hybrid Cloud
Hybrid cloud refers to when an organization decides to utilize both public and private clouds for its cloud computing, storage, or service needs. Sharing the load between private and public cloud options allows the organization to create a versatile setup based on its needs. This approach is one of the most common setups today because of its flexibility to optimize usage, costs, and risk sharing.
Multi Cloud
A type of hybrid cloud setup in which an organization uses cloud technologies from multiple providers. This can make sense in terms of cost optimization (when provider A is cheaper than provider B for some specific services), or to leverage the combined offering of services (which may differ depending on the combination of providers).
Organizations choose their “-aaS” models and type of clouds based on their own
business needs and preferences. After they choose their cloud, they have to decide
how to deploy their systems, services, and applications. That’s where the concepts of
virtualization, containerization, and orchestration come in.
Virtualization
Virtualization is a technology that enables the creation of virtual versions or repre‐
sentations of physical resources, such as servers, operating systems, storage devices,
or networks. It allows multiple virtual instances to run simultaneously on a single
physical machine, each isolated and behaving as if it were running on its dedicated
hardware.
In traditional computing, each physical server or computer typically runs a single operating system and hosts specific applications. However, virtualization enables a single physical machine to host multiple virtual instances, each with its own operating system and applications, which greatly improves hardware utilization.
Containerization
Containerization is a technique in software development and deployment that
involves encapsulating an application and its dependencies into a self-contained
unit called a container. A container provides a lightweight and isolated runtime
environment that allows applications to run consistently across different computing
environments, such as development machines, testing environments, and production
servers.
Containers are created from container images, which are pre-built packages that
include the application code, libraries, runtime environment, and any other depen‐
dencies needed to run the application. These images are typically created using
containerization platforms like Docker.
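As an illustration, a minimal container image definition might look like the following Dockerfile. The file names, base image tag, and start command are placeholders for a hypothetical Python application:

```dockerfile
# Base image providing the language runtime (tag is illustrative)
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer can be cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the application code (app.py is a placeholder)
COPY app.py .
# Process started when a container is created from this image
CMD ["python", "app.py"]
```

Building this file with `docker build` produces a portable image that behaves the same on a developer laptop, a test machine, or a production server.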
The key components and concepts in containerization are:
Container Image –
A container image is a static, read-only file that serves as a blueprint for creating
containers. It contains the application code, along with the necessary runtime
dependencies, libraries, and configuration files. Container images are portable
and can be shared and distributed across different systems.
Container Runtime –
The container runtime is responsible for running and managing containers. It
provides an isolated and secure execution environment for containers, ensuring
that they have their own filesystem, processes, and network stack while sharing
the host machine’s operating system kernel.
Containerization Platforms –
Containerization platforms, such as Docker and container orchestration systems
like Kubernetes, provide tools and services for building, running, and managing
containers at scale. They offer features like container orchestration, networking,
storage, and monitoring to simplify the deployment and management of contain‐
ers.
Container Orchestrators –
Container orchestrators, such as Kubernetes, manage the deployment, scaling,
and monitoring of containers across multiple hosts or nodes. They automate the
scheduling of containers, ensure high availability, and provide advanced features
like load balancing and service discovery.
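To make this concrete, here is a minimal sketch of a Kubernetes Deployment manifest that tells the orchestrator to keep three replicas of a container running; the names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # illustrative name
spec:
  replicas: 3                  # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25      # illustrative container image
        ports:
        - containerPort: 80
```

If a node fails, the orchestrator reschedules the lost replicas elsewhere to maintain the declared count.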
Benefits of containerization include portability across different environments, lighter resource usage than full virtual machines, fast startup times, isolation between applications, and consistent behavior from development through production.
Orchestration
Orchestration refers to the automated management, coordination, and execution of
complex tasks or processes, like a musical conductor in a symphony. It involves
controlling and organizing various components, services, and resources to achieve a
desired outcome efficiently and reliably.
Orchestration plays a crucial role in the deployment and operation of distributed
applications, especially in cloud computing and microservices architectures. It helps
automate and streamline processes that involve multiple interconnected compo‐
nents, such as provisioning and scaling resources, managing dependencies, handling
deployments, and ensuring high availability.
Kubernetes is a container orchestrator. It manages service-oriented application components that are tightly scoped, strongly encapsulated, loosely coupled, independently deployable, and independently scalable. Each service focuses
on a specific business functionality and can be developed, deployed, and scaled
independently. In a microservices architecture, an application is decomposed into a
set of self-contained services, where each service handles a specific task or business
capability.
Besides the other success stories we have seen before, Netflix is a prime example of a cloud-native application: it utilizes a microservices architecture and runs on Amazon Web Services (AWS) cloud infrastructure.
1. Plan - This is the initial phase where the team plans out the features, improve‐
ments, or fixes that they want to implement. Requirements are gathered, and
tasks are prioritized.
2. Code - Developers write and commit the source code. This could involve feature
development, bug fixing, or code refactoring. Tools like Git are commonly used
for version control.
3. Build - The source code is compiled into executable artifacts. Build tools, such as
Maven, Gradle, or Jenkins, automate this process, ensuring that the code can be
reliably built into software.
4. Test - Automated tests are run to ensure that the software is reliable, secure,
and meets the quality standards. This can encompass unit tests, integration tests,
system tests, and more.
5. Release - Once the software passes tests, it’s ready for release. Automated deploy‐
ment tools can push the software into staging or production environments.
The release process might involve Continuous Integration (CI) and Continuous
Deployment (CD) pipelines.
6. Deploy - The software is deployed to production servers. This step is sometimes
combined with the release, especially in CD environments where code is auto‐
matically deployed once it passes all tests.
7. Operate - After deployment, operations teams monitor and maintain the system. This includes ensuring uptime, scaling resources as necessary, and addressing any runtime issues.
8. Monitor - Application performance, logs, and user feedback are collected continuously. The insights gathered here feed back into the next planning phase, closing the iterative loop.
These steps plus the previously explored 12-factor manifesto lead us to the notion
of CI/CD. This acronym stands for Continuous Integration and Continuous Deliv‐
ery/Deployment. The “CD” part often refers to two possible options:
Continuous Delivery –
An extension of CI. It ensures that the code is always in a deployable state after passing the CI pipelines. In this case, the release to production is still a manual step.
Continuous Deployment –
While Continuous Delivery ensures code is always deployable, Continuous
Deployment goes a step further. Every change that passes the CI tests is automati‐
cally deployed to the production environment without manual intervention.
These are fundamental principles in the DevOps philosophy, emphasizing the auto‐
mation of the software development and deployment processes. They are designed to
increase the velocity, efficiency, and reliability of software releases, based on the eight
iterative DevOps steps.
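As an illustration, the CI portion of such a pipeline might be declared like this. The syntax is loosely modeled on GitHub Actions, and the build and test commands are placeholders:

```yaml
name: ci
on: [push]                          # run the pipeline on every commit
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the committed code
      - run: make build             # build step (placeholder command)
      - run: make test              # automated tests gate the release
```

Only changes that pass every step of this pipeline become candidates for delivery or automatic deployment.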
Declarative vs Imperative
One of the key concepts in cloud native is the distinction between declarative and imperative, two different programming paradigms, used in this case to specify the expected configuration of cloud native systems:
Declarative –
You describe the desired end state of the system, and the platform works out the steps required to reach and maintain that state. Kubernetes manifests follow this approach.
Imperative –
You specify the exact commands or steps the system must execute, one by one, to reach the desired state.
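The difference is easy to see with kubectl, assuming access to a running cluster; the deployment name, manifest file, and image are illustrative:

```shell
# Imperative: you issue explicit commands, step by step
kubectl create deployment web --image=nginx

# Declarative: you describe the desired state in a manifest and let
# Kubernetes reconcile the cluster toward it
kubectl apply -f deployment.yaml
```

With the declarative approach, re-running `kubectl apply` is safe: Kubernetes only changes what differs from the declared state.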
Stateful vs Stateless
Stateful and Stateless are two fundamental concepts in computing and software
design that describe how systems or processes retain information over time and
across interactions. Concretely:
Stateful –
A system or process is considered stateful when it retains or remembers informa‐
tion from one transaction to the next. This “state” information can be used to
influence future transactions, meaning that past interactions can affect future
ones.
Stateless –
A system or process is considered stateless when it doesn’t retain any record of
previous interactions. Each transaction is processed without any memory of past
transactions. Every request is treated as an entirely new operation.
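The contrast can be sketched in a few lines of Python; the class and method names here are invented purely for illustration:

```python
class StatelessGreeter:
    """Stateless: each request is handled with no memory of past requests."""

    def handle(self, request: str) -> str:
        return f"Hello, {request}!"


class StatefulCounter:
    """Stateful: retained state from past requests influences future ones."""

    def __init__(self):
        self.visits = 0  # the retained "state"

    def handle(self, request: str) -> str:
        self.visits += 1
        return f"Hello, {request}! You are visitor #{self.visits}."


stateless = StatelessGreeter()
print(stateless.handle("Ada"))  # identical calls always give the same reply
print(stateless.handle("Ada"))

stateful = StatefulCounter()
print(stateful.handle("Ada"))   # visitor #1
print(stateful.handle("Ada"))   # visitor #2: past interactions affect this one
```

Stateless components are easier to scale horizontally, because any replica can handle any request; stateful components must keep their state somewhere, which constrains how they can be replicated.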
Serverless
Serverless is a cloud native development model that allows developers to build
and run applications without having to manage servers. There are still servers
in serverless, but they are abstracted away from the developer. A cloud provider
handles the routine work of provisioning, maintaining, and scaling the server infra‐
structure. Developers can simply package their code in containers for deployment.
Once deployed, serverless apps respond to demand and automatically scale up and
down as needed. Serverless offerings from public cloud providers are usually metered
on-demand through an event-driven execution model. As a result, when a serverless
function is sitting idle, it doesn’t cost anything.
These are just a few terms that you need to know before starting Chapters 5 and 6, which will focus exclusively on Kubernetes and its technical details. But before that, we will get our regular end-of-chapter expert insights, in this case from someone who knows the KCNA and other Kubernetes certifications very well: Benjamin Muschko, a renowned cloud native consultant and the author of the O'Reilly CKA, CKAD, and CKS Study Guides.