ES-5
Application development and deployment in the cloud – Docker, microservices, Kubernetes, Serverless.
Continuous Integration/Continuous Delivery, Introduction to Enterprise Architecture
DOCKER
Introduction: Cloud computing has transformed the landscape of application development
and deployment, offering unparalleled scalability, flexibility, and efficiency. In this essay, we
will explore how Docker, a leading containerization platform, revolutionizes the development
and deployment of applications in the cloud. We will discuss the key concepts of Docker, its
benefits, and best practices for leveraging Docker in cloud environments.
Key Concepts of Docker:
1. Docker Image: A Docker image is a lightweight, read-only template that contains the
application code, runtime, libraries, and dependencies required to run an application.
Images are used to create container instances.
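As a concrete illustration, the short sketch below builds a tiny image programmatically with the Docker SDK for Python; the inline Dockerfile, base image, and tag are assumptions made for this example, not part of the original text.

```python
import io
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# An inline Dockerfile: a base runtime plus the command to run (no app files needed here).
dockerfile = b"""
FROM python:3.12-slim
CMD ["python", "-c", "print('hello from my image')"]
"""

# Build the read-only image template and tag it; containers are later created from it.
image, _logs = client.images.build(fileobj=io.BytesIO(dockerfile), tag="hello:1.0")
print(image.tags)  # e.g. ['hello:1.0']
```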
Docker Client
The Docker client is how users interact with Docker. The docker command uses the Docker API, and the client can communicate with more than one daemon. When a user runs a docker command in the terminal, the client sends the instruction to the Docker daemon, which receives it in the form of a command and a REST API request.
The main purpose of the Docker client is to provide a way to pull images from a Docker registry and run them on a Docker host. The commands most commonly used from the client are docker
build, docker pull, and docker run.
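The same workflow can be driven programmatically. Here is a minimal sketch using the Docker SDK for Python (an assumption; the plain docker CLI behaves the same), pulling an image from the registry and running it on the host exactly as docker pull and docker run do:

```python
import docker  # Docker SDK for Python: pip install docker

# The client talks to the local Docker daemon via the Docker API.
client = docker.from_env()

# docker pull: fetch an image from the configured registry (Docker Hub by default).
client.images.pull("alpine", tag="3.19")

# docker run: create a container from the image, capture its output, remove it on exit.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```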
Docker Host
A Docker host is the machine on which the Docker daemon runs and which can host one or more containers. It comprises the Docker daemon, images, containers, networks, and storage.
Docker Registry
All Docker images are stored in a Docker registry. Docker Hub is a public registry that anyone can use, and we can also run our own private registry. With the docker run or docker pull commands, we can fetch the required images from our configured registry, and images are pushed to the configured registry with the docker push command.
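A sketch of the push side with the Docker SDK for Python; the image name and the private registry address are hypothetical placeholders:

```python
import docker

client = docker.from_env()

# Tag a locally built image for a private registry (names are placeholders).
image = client.images.get("myapp:latest")
image.tag("registry.example.com/team/myapp", tag="1.0")

# docker push: upload the tagged image so other hosts can docker pull it.
client.images.push("registry.example.com/team/myapp", tag="1.0")
```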
Docker Objects
Whenever we use Docker, we create and use images, containers, volumes, networks, and other objects. The main Docker objects are discussed below:
Docker Images
An image is a read-only template that contains the instructions for creating a Docker container. Images are used to store and ship applications, and they are an important part of the Docker experience because they enable a level of collaboration between developers that was not possible before.
Docker Containers
Containers are runnable instances created from Docker images. With the Docker API or CLI, we can start, stop, delete, or move a container. A container can access only those resources defined in its image, unless additional access is configured when the image is built or the container is created.
Docker Storage
We can store data within the writable layer of the container, but this requires a storage driver. The storage driver controls and manages the images and containers on the Docker host.
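Because the container's writable layer disappears when the container is removed, data that must persist is usually kept in a named volume instead. Below is a minimal sketch with the Docker SDK for Python; the volume name and image are assumptions for the example:

```python
import docker

client = docker.from_env()

# Create a named volume that outlives any single container's writable layer.
client.volumes.create(name="app-data")

# Mount the volume at /data; the file written there survives container removal.
client.containers.run(
    "alpine:3.19",
    ["sh", "-c", "echo persisted > /data/note.txt"],
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```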
MICROSERVICES
Introduction: The advent of cloud computing has transformed the landscape of application development
and deployment, offering unprecedented scalability, agility, and cost-efficiency. One of the key paradigms
driving this transformation is microservices architecture. In this essay, we will explore the evolution of
application development and deployment in the cloud, focusing specifically on the role of microservices. We
will discuss the principles of microservices architecture, its benefits, challenges, and best practices for
successful adoption.
Microservices architecture structures an application as a collection of loosely coupled, independently deployable services, each responsible for a specific business function. These services communicate via lightweight protocols such as HTTP/REST or messaging queues. Its core principles are outlined below (a short code sketch follows the list):
1. Decomposition: Applications are decomposed into smaller, manageable services, each focused on a
specific business capability.
2. Loose Coupling: Services are loosely coupled, allowing them to be developed, deployed, and scaled
independently without affecting other services.
3. Autonomy: Each service is autonomous, with its own data store and deployment pipeline, enabling
teams to work independently and iterate quickly.
4. Resilience: Microservices promote resilience by isolating failures to individual services, preventing
cascading failures and minimizing downtime.
5. Scalability: Services can be scaled independently based on demand, allowing for efficient resource
utilization and improved performance.
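To make these principles concrete, here is a minimal sketch of a single microservice exposing one business capability over HTTP/REST. Flask and the order-lookup domain are illustrative assumptions, not prescribed by the text:

```python
from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)

# In-memory data standing in for the service's own data store (principle 3: autonomy).
ORDERS = {"1001": {"status": "shipped"}}

@app.route("/orders/<order_id>")
def get_order(order_id):
    # One narrowly scoped business capability (principle 1: decomposition).
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="order not found"), 404
    return jsonify(order)

if __name__ == "__main__":
    # Runs, deploys, and scales independently of other services (principle 2).
    app.run(port=5000)
```

Other services would call this one over HTTP rather than sharing its code or database, which is what keeps the coupling loose.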
Benefits of Microservices: independent development, deployment, and scaling of services; freedom to choose the most suitable technology for each service; fault isolation that limits the blast radius of failures; and smaller, more autonomous teams that can release faster.
Challenges of Microservices: the operational complexity of a distributed system; network latency and partial failures in inter-service communication; maintaining data consistency across service-owned databases; and the need for mature automation, monitoring, and DevOps practices.
KUBERNETES
Introduction to Kubernetes (K8S)
Kubernetes is an open-source platform that manages Docker containers in the form of a cluster. Along with
the automated deployment and scaling of containers, it provides healing by automatically restarting failed
containers and rescheduling them when their hosts die. This capability improves the application’s
availability.
Kubernetes is an open-source container management tool that automates container deployment, scaling, descaling, and load balancing (it is also called a container orchestration tool). It is written in Go and has a vast community, as it was first developed by Google and later donated to the CNCF (Cloud Native Computing Foundation). Kubernetes can group any number of containers into one logical unit for managing and deploying them easily, and it works well across public cloud, hybrid, and on-premises environments.
Kubernetes is continually evolving to compete with other container orchestration platforms. As it progresses,
Kubernetes is poised to play a significant role in shaping the future of technology. Key trends shaping
Kubernetes include AI-powered automation, edge computing, data governance, multi-cloud applications,
security, and resource optimization.
Features of Kubernetes
Key features include automated rollouts and rollbacks, service discovery and load balancing, self-healing (restarting failed containers and rescheduling them when nodes die), horizontal scaling, storage orchestration, and management of configuration and secrets.
Kubernetes Vs Docker
Docker builds and runs individual containers on a single host, while Kubernetes orchestrates many containers across a cluster of machines, handling scheduling, scaling, networking, and recovery. The two are complementary: a Kubernetes cluster typically manages containers that were built and packaged with Docker.
Architecture of Kubernetes
Kubernetes follows a client-server architecture in which the master (control plane) runs on one machine and the worker nodes on separate Linux machines. The master manages Docker containers across the worker nodes, which together form a "Kubernetes cluster". Developers deploy applications in Docker containers through the Kubernetes master.
This streamlined architecture of Kubernetes facilitates efficient container management, networking, and
communication between master and worker nodes, ensuring the smooth operation of containerized
applications.
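As an illustration of how developers deploy applications through the master, the sketch below uses the official Kubernetes Python client to create a three-replica Deployment; the nginx image and all names are placeholders chosen for the example:

```python
from kubernetes import client, config  # pip install kubernetes

# Load credentials from ~/.kube/config; requests go to the master's API server.
config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=3,  # the master schedules these pods across the worker nodes
    selector=client.V1LabelSelector(match_labels={"app": "web"}),
    template=template,
)
deployment = client.V1Deployment(metadata=client.V1ObjectMeta(name="web"), spec=spec)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```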
SERVERLESS ARCHITECTURE
Serverless Architecture is an approach in cloud computing that enables developers to build and run services
without the need to manage the underlying infrastructure.
While your application still runs on a server, the cloud provider handles all server management
and infrastructure tasks, such as provisioning servers, managing operating systems, and allocating resources.
Consequently, developers can write and deploy code without having to deal with computing resource
management or server management.
In Serverless Architecture, understanding certain fundamental terms is crucial as they shape the framework
for grasping the dynamics and functionality of serverless systems. These key terms play a significant role in
defining the structure and operation of serverless computing:
Event: Triggers the execution of a function, originating from various sources like HTTP requests,
database changes, file uploads, timers, or external services, making Serverless applications event-
driven.
Stateless: Denotes functions that do not maintain state or memory between invocations, allowing
for easy scalability and distribution.
Cold Start: Describes the delay during the initial invocation or after a period of inactivity, resulting in
longer response times compared to “warm” executions.
Warm Execution: Refers to a function that has already been invoked, with resources allocated and the runtime environment initialized, leading to faster execution (see the sketch after this list).
Concurrency Limit: Specifies the maximum number of function instances that can run simultaneously in one region, as determined by the cloud provider.
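The stateless and cold/warm distinctions are visible in code: anything created at module load time is paid for once per cold start and then reused across warm invocations. A sketch in the shape of an AWS Lambda handler (the provider choice is an assumption):

```python
import time

# Runs once per cold start; reused while this instance stays warm.
BOOT_TIME = time.time()
EXPENSIVE_RESOURCE = {"connected_at": BOOT_TIME}  # stands in for a DB connection

def handler(event, context):
    # Stateless contract: never rely on module state surviving between
    # invocations, but reuse it opportunistically while the instance is warm.
    return {"warm_for_seconds": round(time.time() - BOOT_TIME, 3)}
```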
Now that we have a grasp of what Serverless Architecture is and the common terminologies associated with
it, let’s delve deeper into its operation.
Serverless systems are designed to execute specific functions, which are offered by cloud providers as part
of the Function-as-a-Service (FaaS) model. The process follows these steps:
Each function performs a specific task when triggered by an event. The event triggers the cloud
service provider to execute the function.
If the defined event is an HTTP request, it is triggered by a user action such as clicking a link or sending an email.
When the function is invoked, the cloud service provider determines whether it needs to run on an
already active server. If not, it launches a new server.
Once this is complete, the user will see the output of the function.
These execution processes operate in the background, allowing developers to write and deploy their
application code.
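For example, a FaaS function triggered by an HTTP request might look like the following; AWS Lambda behind API Gateway is assumed here, and other providers are analogous:

```python
import json

def handler(event, context):
    # 'event' carries the trigger payload; here, an HTTP request from API Gateway.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```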
Serverless Architecture offers several key benefits:
Scalability: Serverless applications automatically scale up or down based on the incoming workload,
ensuring they can handle fluctuating traffic without manual intervention.
Cost Efficiency: Pay-as-you-go pricing means payment is only for the resources consumed during
function executions. There are no ongoing costs for idle resources, making it cost-effective, especially
for sporadically used applications.
Rapid Development: Serverless promotes quick development and deployment. Developers can write
and deploy functions swiftly, allowing for faster time-to-market for new features or applications.
Granularity: Functions in Serverless applications are highly granular, enabling modular, maintainable
code. Each function focuses on a specific task or service.
Event-Driven Flexibility: Serverless is well-suited for event-driven applications, making it ideal for
use cases such as real-time analytics, chatbots, IoT solutions, and more.
While Serverless offers numerous advantages, it comes with challenges. Some of the biggest limitations of
Serverless Architecture include:
Vendor Lock-In: Serverless platforms are typically offered by specific cloud providers, making it
difficult to switch providers without significant code changes, resulting in vendor lock-in.
Limited Function Execution Time: Serverless platforms impose execution time limits on functions,
typically ranging from a few seconds to a few minutes. This constraint can be challenging for long-
running tasks.
Potentially Higher Costs: While Serverless can be cost-effective for many use cases, it may result in higher costs for applications with consistently high and predictable workloads; in such cases, traditional server infrastructure may be the better choice.
Case Studies and Use Cases:
1. Chatbots and Voice Assistants: Serverless architecture is well-suited for building conversational interfaces such as chatbots and voice assistants. Organizations can leverage serverless functions to process natural language queries, invoke external APIs, and orchestrate backend services to deliver interactive and personalized user experiences across various channels.
CONTINUOUS INTEGRATION/CONTINUOUS DELIVERY (CI/CD)
Continuous Integration (CI) is the practice of frequently integrating code changes from multiple developers into a
shared repository, where automated builds and tests are performed. The main goals of CI are to detect
integration errors early, maintain a consistent codebase, and accelerate the feedback loop for developers.
In a CI process:
1. Developers regularly commit their code changes to a version control system, such as Git.
2. Each commit triggers an automated build process that compiles the code, runs unit tests, and performs other
validation checks.
3. If the build is successful and all tests pass, the changes are integrated into the main code repository.
4. If the build fails or tests are unsuccessful, developers are notified, and they can quickly address and fix the
issues.
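As a toy illustration of steps 2-4, the script below, which a CI server might run on every commit, fails the build as soon as any check fails. The specific checks are assumptions; real pipelines are normally defined in the CI tool's own configuration:

```python
import subprocess
import sys

def run_step(name: str, *cmd: str) -> None:
    print(f"--- {name} ---")
    if subprocess.run(cmd).returncode != 0:
        # A failing step fails the whole build and the developer is notified (step 4).
        sys.exit(f"Build failed at '{name}'; fix the issue and commit again.")

run_step("compile check", "python", "-m", "compileall", "-q", ".")
run_step("unit tests", "python", "-m", "pytest", "-q")
print("Build green: changes can be integrated into the main repository (step 3).")
```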
Continuous Delivery extends the CI process by automating the deployment of code changes to production or
staging environments. The goal of CD is to ensure that software releases can be reliably and efficiently
deployed to production at any time, with minimal manual intervention.
In a CD process:
1. After successful integration and testing in the CI phase, the code changes are automatically deployed to
staging or pre-production environments.
2. Automated tests, including integration tests and user acceptance tests, are executed in the staging
environment to validate the functionality and performance of the application.
3. If the tests pass in the staging environment, the code changes are considered ready for release.
4. The final step in Continuous Delivery is manual approval or trigger for deploying the changes to the
production environment. This step can be automated in Continuous Deployment, where changes are
automatically deployed to production without manual intervention.
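The same flow in miniature: after the staging tests pass, a manual approval gates the production deploy, and removing that prompt would turn the sketch into Continuous Deployment. The test path and deploy command are hypothetical:

```python
import subprocess
import sys

def run(*cmd: str) -> None:
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Pipeline stopped: {' '.join(cmd)} failed.")

# Step 2: automated tests against the staging environment (path is a placeholder).
run("python", "-m", "pytest", "tests/integration", "-q")

# Step 4: manual approval before production (automate this for Continuous Deployment).
if input("Staging tests passed. Deploy to production? [y/N] ").lower() == "y":
    run("python", "deploy.py", "--env", "production")  # deploy.py is hypothetical
    print("Release deployed to production.")
```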
Benefits of CI/CD:
1. Faster Time-to-Market: CI/CD automates the build, test, and deployment processes, enabling faster
delivery of new features and updates to end-users.
2. Improved Code Quality: Automated testing and validation ensure that code changes are thoroughly tested
and verified before deployment, reducing the risk of introducing bugs or regressions.
3. Increased Collaboration: CI/CD encourages collaboration among development, testing, and operations
teams by providing a shared and transparent development pipeline.
4. Reduced Manual Effort: Automation of repetitive tasks such as building, testing, and deploying code
changes reduces manual effort and minimizes human errors.
5. Enhanced Reliability: Continuous integration and delivery help maintain a stable and reliable software
delivery pipeline, enabling organizations to deliver high-quality software consistently.
Key components of a CI/CD pipeline:
1. Version Control System (VCS): A VCS such as Git or Subversion serves as the central repository for storing
source code, enabling collaboration, versioning, and change tracking.
2. Build Automation: CI/CD pipelines automate the process of compiling source code, running tests, and
generating executable artifacts using build automation tools like Jenkins, Travis CI, or CircleCI.
3. Automated Testing: Automated testing frameworks such as JUnit, Selenium, or Jest are used to execute unit
tests, integration tests, and end-to-end tests to validate code changes and ensure software quality.
4. Deployment Automation: CD pipelines automate the deployment of application artifacts to target
environments such as development, staging, or production using deployment automation tools like Docker,
Kubernetes, or AWS CodeDeploy.
5. Monitoring and Feedback: CI/CD pipelines provide visibility into the status of builds, tests, and deployments
through dashboards, notifications, and alerts, enabling teams to monitor performance, track progress, and
identify issues in real-time.
Best practices for CI/CD:
1. Automate Everything: Automate every aspect of the software delivery process, including build, test,
deployment, and monitoring tasks, to ensure consistency, repeatability, and efficiency.
2. Keep Builds Fast: Optimize build times by parallelizing tasks, caching dependencies, and minimizing
unnecessary steps to keep feedback cycles short and enable rapid iteration.
3. Implement Version Control: Use version control systems such as Git to manage source code, configuration
files, and infrastructure definitions, enabling traceability, collaboration, and change management.
4. Use Short-Lived Branches: Adopt a branching strategy that promotes short-lived feature branches and
frequent merges to the mainline, facilitating continuous integration and reducing integration conflicts.
INTRODUCTION TO ENTERPRISE ARCHITECTURE
In the era of digital transformation, enterprises are increasingly turning to cloud computing to drive
innovation, enhance agility, and improve operational efficiency. Cloud computing offers a wide array of
benefits, including scalability, flexibility, and cost-effectiveness, making it an attractive option for application
development and deployment. However, as organizations embrace cloud technologies, they must also
consider the broader architectural context within which these technologies operate. This is where enterprise
architecture (EA) comes into play.
1) What are microservices, and how do they differ from monolithic architectures?
Microservices are a software architectural style where an application is composed of multiple small,
independently deployable services, each running its own process and communicating with lightweight
mechanisms, often an HTTP resource API. Each service is responsible for a specific business function and
can be developed, deployed, and scaled independently. Microservices promote modularity, flexibility, and
scalability, making it easier to manage complex systems and accelerate development cycles.
In contrast, Monolithic Architectures consist of a single, self-contained application where all functionality is
grouped together into a single codebase and deployed as a single unit. In a monolithic architecture, different
components of the application, such as user interface, business logic, and data access, are tightly coupled and
run within the same process space. Monolithic architectures are typically easier to develop and deploy initially
but can become difficult to maintain and scale as the application grows in size and complexity.
Here are some key differences between microservices and monolithic architectures:
1. Modularity: Microservices promote modularity by breaking down the application into smaller, self-contained services, each responsible for a specific business function. In contrast, monolithic architectures are less modular, with all functionality tightly integrated into a single codebase.
2. Scalability: Individual microservices can be scaled independently based on demand, whereas a monolithic application must be scaled as a whole, even when only one component is under load.
3. Flexibility: Microservices offer flexibility in technology choices, allowing each service to be implemented
using different programming languages, frameworks, and databases based on specific requirements.
Monolithic architectures typically have a single technology stack for the entire application.
4. Deployment: Microservices support independent deployment of services, enabling faster release cycles
and continuous delivery. In contrast, monolithic architectures require deploying the entire application as a
single unit, which can be more complex and time-consuming.
2) What is the role of containers in microservices architecture?
Containers play a crucial role in microservices architecture by providing lightweight, portable, and isolated
environments for deploying and running individual microservices. Here's how containers contribute to
microservices architecture:
1. Isolation: Containers provide a high level of isolation for microservices, ensuring that each service runs in
its own isolated environment with its dependencies and resources. This isolation prevents interference and
conflicts between services, enhancing reliability and security.
2. Portability: Containers encapsulate the entire runtime environment, including the application code,
libraries, and dependencies, making them highly portable across different infrastructure environments.
Microservices packaged as containers can be easily deployed and run consistently on any platform that
supports containerization, such as Kubernetes or Docker Swarm.
3. Scalability: Containers start quickly and carry little overhead, so individual microservices can be scaled out or in rapidly in response to demand.
4. Rapid Deployment: Container images can be built, shipped, and started in seconds, supporting the fast, independent release cycles that microservices aim for.
5. Resource Efficiency: Containers are lightweight and consume fewer resources compared to virtual machines, making them more efficient in terms of resource utilization. Multiple containers can run on the same host without significant overhead, enabling higher density and better utilization of infrastructure resources.