
Let's get Started

What is DevOps?
DevOps stands for Development and Operations. It is a software engineering practice
that focuses on bringing together the development team and the operations team
for the purpose of automating the project at every stage. This approach helps in
easily automating the project service management in order to aid the objectives at
the operational level and improve the understanding of the technological stack used
in the production environment.
This practice is closely related to the agile methodology and mainly focuses on team
communication, resource management, and teamwork. The main benefits of
following this structure are the speed of development, faster resolution of issues at
the production environment level, the stability of applications, and the scope for
innovation behind it.

DevOps Tools
DevOps is a methodology aimed at increasing the productivity and quality of product
development. The main tools used in this methodology are:
Version Control System tools, e.g., Git
Continuous Integration tools, e.g., Jenkins
Continuous Testing tools, e.g., Selenium
Configuration Management and Deployment tools, e.g., Puppet, Chef, Ansible
Continuous Monitoring tools, e.g., Nagios
Containerization tools, e.g., Docker

Organizations that have adopted this methodology reportedly accomplish thousands
of deployments in a single day, thereby providing increased reliability, stability, and
security along with improved customer satisfaction.

DevOps Interview Questions For Freshers


1. Who is a DevOps engineer?
A DevOps engineer is a person who works with both software developers and the IT
staff to ensure smooth code releases. They are generally developers who develop an
interest in the deployment and operations domain, or system admins who develop a
passion for coding and move towards the development side.
In short, a DevOps engineer is someone who has an understanding of the SDLC (Software
Development Lifecycle) and of the automation tools used for developing CI/CD pipelines.

2. Why has DevOps become famous?

These days, the market window of products has reduced drastically. We see new
products almost daily. This provides a myriad of choices to consumers, but it comes at
the cost of heavy competition in the market. Organizations can't afford to release big
features after a long gap. They tend to ship small features as releases to the customers
at regular intervals so that their products don't get lost in this sea of competition.
Customer satisfaction is now a motto for organizations and has become
the goal of any product for its success. In order to achieve this, companies need to do
the below things:
Frequent feature deployments
Reduce time between bug fixes
Reduce failure rate of releases
Quicker recovery time in case of release failures.
In order to achieve the above points, and thereby achieve seamless product
delivery, the DevOps culture acts as a very useful tool. Due to these advantages,
multinational companies like Amazon and Google have adopted the
methodology, which has resulted in increased performance.

3. What is the use of SSH?


SSH stands for Secure Shell and is an administrative protocol that lets users access
and control remote servers over the Internet using the command line.
SSH is a secure, encrypted replacement for the previously used Telnet, which was
unencrypted and insecure. It ensures that the communication with the remote
server occurs in an encrypted form.
SSH also has a mechanism for remote user authentication, input communication
between the client and the host, and sending the output back to the client.
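For illustration, here is a minimal shell sketch of typical SSH usage; the user name, host (user@remote-host), and key comment are assumed placeholders:

# Generate a local key pair; the public key is later placed on the remote server.
ssh-keygen -t ed25519 -C "devops@example.com"
# Copy the public key to the remote server to enable key-based (password-less) login.
ssh-copy-id user@remote-host
# Open an encrypted session and run a command on the remote server.
ssh user@remote-host 'uptime'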

4. What is configuration management?

Configuration management (CM) is basically the practice of handling changes
systematically so that the system does not lose its integrity over a period of time.
This involves certain policies, techniques, procedures, and tools for evaluating
change proposals, managing them, and tracking their progress, along with
maintaining appropriate documentation for the same.
CM helps in providing administrative and technical direction to the design and
development of the application.
The following diagram gives a brief idea about what CM is all about:

DevOps Configuration Management

5. What is the importance of having configuration management in DevOps?
Configuration management (CM) helps the team in the automation of time-
consuming and tedious tasks thereby enhancing the organization’s performance and
agility.

It also helps in bringing consistency and improving the product development process
by employing means of design streamlining, extensive documentation, control, and
change implementation during various phases/releases of the project.

6. What does CAMS stand for in DevOps?


CAMS stands for Culture, Automation, Measurement, and Sharing. It represents the
core values of DevOps.

7. What is Continuous Integration (CI)?


Continuous Integration (CI) is a software development practice that makes sure
developers integrate their code into a shared repository as and when they are done
working on a feature. Each integration is verified by means of an automated build
process that allows teams to detect problems in their code at a very early stage
rather than finding them after deployment.

Continuous Integration (CI)

Based on the above flow, we can have a brief overview of the CI process.

Developers regularly check out code into their local workspaces and work on the
features assigned to them.
Once they are done working on it, the code is committed and pushed to the
remote shared repository, which is managed using an effective version
control tool like Git.
The CI server keeps track of the changes made to the shared repository and
pulls the changes as soon as it detects them.
The CI server then triggers a build of the code and runs the unit and integration
test cases if they are set up.
The team is informed of the build results. In case of a build failure, the team
has to fix the issue as early as possible, and then the process repeats (a rough
sketch of such a build job is shown below).
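As an illustration, the build job triggered by the CI server often boils down to a small script like the one below; the repository URL and the Maven commands are assumptions (any build tool could take their place):

#!/bin/sh
# Hypothetical CI build job: fetch, build, and test the latest commit.
set -e                                   # stop at the first failing step
git clone https://github.com/example/app.git workspace
cd workspace
mvn -B clean verify                      # compile, run unit/integration tests, and package
# A non-zero exit status marks the build as failed and the team is notified.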

8. Why is Continuous Integration needed?


By incorporating Continuous Integration for both development and testing, it has
been found that software quality improves and the time taken to deliver software
features drastically reduces.
This also allows the development team to detect and fix errors at an early stage, as
each and every commit to the shared repository is built automatically and run
against the unit and integration test cases.

9. What is Continuous Testing (CT)?


Continuous Testing (CT) is the phase of DevOps that involves running automated
test cases as part of an automated software delivery pipeline, with the sole aim of
getting immediate feedback regarding the quality of the build and validation of the
business risks associated with the code developed by the developers.
Using this phase helps the team test each build continuously (as soon as the
developed code is pushed), thereby giving the dev teams a chance to get instant
feedback on their work and ensuring that these problems don't arrive in the later
stages of the SDLC.

Doing this drastically speeds up the developer's workflow for building the project,
since no manual intervention is needed to rebuild the project and run the
automated test cases every time changes are made.

10. What are the three important DevOps KPIs?


A few important DevOps KPIs are given below:
Reduced mean time to recover from a failure.
Increased deployment frequency.
Reduced percentage of failed deployments.

Intermediate Interview Questions


11. Explain the different phases in DevOps methodology.
DevOps mainly has 6 phases and they are:
Planning:
This is the first phase of a DevOps lifecycle that involves a thorough understanding of
the project to ultimately develop the best product. When done properly, this phase
gives various inputs required for the development and operations phases. This phase
also helps the organization to gain clarity regarding the project development and
management process.
Tools like Google Apps, Asana, Microsoft Teams, etc. are used for this purpose.
Development:
The planning phase is followed by the development phase, where the project is built
by developing the system infrastructure, developing features by writing code, and then
defining the test cases and the automation process. Developers store their code in a
code manager called a remote repository, which aids team collaboration by allowing
viewing, modification, and versioning of the code.
Tools like Git, IDEs like Eclipse and IntelliJ, and technology stacks like Node, Java,
etc. are used.
Continuous Integration (CI):

This phase allows for automation of code validation, build, and testing. This ensures
that the changes are made properly without development environment errors and
also allows the identification of errors at an initial stage.
Tools like Jenkins, CircleCI, etc. are used here.
Deployment:
DevOps aids in the deployment automation process by making use of tools and
scripts, with the final goal of automating the process by means of feature
activation. Here, cloud services can be used as a force that assists in upgrading from
finite infrastructure management to cost-optimized management with the potential
for infinite resources.
Tools like Microsoft Azure, Amazon Web Services, Heroku, etc. are used.
Operations:
This phase usually occurs throughout the lifecycle of the product/software due to the
dynamic infrastructural changes. This provides the team with opportunities for
increasing the availability, scalability, and effective transformation of the product.
Tools like Loggly, BlueJeans, AppDynamics, etc. are commonly used in this phase.
Monitoring:
Monitoring is a permanent phase of the DevOps methodology. This phase is used for
monitoring and analyzing information to know the status of software applications.
Tools like Nagios, Splunk, etc. are commonly used.

12. How is DevOps different from the Agile methodology?


DevOps is a practice or culture that allows the development team and the
operations team to come together and collaborate for successful product
development. This involves making use of practices like continuous development,
integration, testing, deployment, and monitoring of the SDLC.
DevOps tries to reduce the gap between the developers and the operations team for
the effective launch of the product.

Agile is nothing but a software development methodology that focuses on
incremental, iterative, and rapid releases of software features by involving the
customer by means of feedback. This methodology removes the gap between the
requirement understanding of the clients and the developers.

Agile Methodology

13. Differentiate between Continuous Deployment and Continuous Delivery.
The main differences between Continuous Deployment and Continuous Delivery are
given below:

Continuous Deployment:
The deployment to the production environment is fully automated and does not
require manual/human intervention.
The application is run by following an automated set of instructions, and no
approvals are needed.

Continuous Delivery:
Some amount of manual intervention, with the manager's approval, is needed for
deployment to the production environment.
The working of the application depends on the decision of the team.

Continuous Deployment and Continuous Delivery

14. What can you say about antipatterns of DevOps?

A pattern is something that is most commonly followed by large masses of entities. If
a pattern is adopted by an organization just because it is being followed by others,
without gauging the requirements of the organization, then it becomes an anti-
pattern. Similarly, there are multiple myths surrounding DevOps which can
contribute to antipatterns. They are:
DevOps is a process and not a culture.
DevOps is nothing but Agile.
There should be a separate DevOps group.
DevOps solves every problem.
DevOps equates to developers running a production environment.
DevOps follows Development-driven management
DevOps does not focus much on development.
As we are a unique organization, we don’t follow the masses and hence we won’t
implement DevOps.
We don't have the right set of people, hence we can't implement the DevOps culture.

15. Can you tell me something about Memcached?


Memcached is a free, open-source, in-memory object caching system that is
high-performance, distributed, and generic in nature. It is mainly used for
speeding up dynamic web applications by reducing the database load.
Memcached can be used in the following cases:
Profile caching in social networking domains like Facebook.
Web page caching in the content aggregation domain.
Profile tracking in Ad targeting domain.
Session caching in e-commerce, gaming, and entertainment domain.
Database query optimization and scaling in the Location-based services domain.
Benefits of Memcached:
Using Memcached speeds up the application processes by reducing the hits to a
database and reducing the I/O access.
It helps in determining what steps are more frequently followed and helps in
deciding what to cache.

Some of the drawbacks of using Memcached are:
In case of failure, the data is lost as it is neither a persistent data store nor a
database.
It is not an application-specific cache.
Large objects cannot be cached.
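As a small, hedged illustration of how an application talks to Memcached, the commands below start a local daemon and use its plain-text protocol over netcat; the key, value, and port are assumptions:

# Start a memcached daemon with 64 MB of memory on the default port 11211.
memcached -d -m 64 -p 11211
# Cache the 3-byte value "bob" under the key user_42 for 60 seconds.
printf 'set user_42 0 60 3\r\nbob\r\nquit\r\n' | nc localhost 11211
# Read it back; a cache hit avoids a database query.
printf 'get user_42\r\nquit\r\n' | nc localhost 11211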

16. What are the various branching strategies used in the version control system?
Branching is a very important concept in version control systems like Git, and it
facilitates team collaboration. Some of the most commonly used branching types
are:
Feature branching
This branching type ensures that a particular feature of a project is maintained
in a branch.
Once the feature is fully validated, the branch is then merged into the main
branch.
Task branching
Here, each task is maintained in its own branch with the task key being the
branch name.
Naming the branch name as a task name makes it easy to identify what task is
getting covered in what branch.
Release branching

Once the set of features meant for a release is completed, they are cloned into a
branch called the release branch. Any further features will not be added to this
branch.
Only bug fixes, documentation, and release-related activities are done in a
release branch.
Once things are ready, the release branch gets merged into the main branch and is
tagged with the release version number.
These changes also need to be pushed into the develop branch which would
have progressed with new feature development.
The branching strategies followed would vary from company to company based on
their requirements and strategies.

17. Can you list down certain KPIs which are used for gauging
the success of DevOps?
KPI stands for Key Performance Indicator. Some of the popular KPIs used for
gauging the success of DevOps are:
Application usage, performance, and traffic
Automated Test Case Pass Percentage.
Application Availability
Change volume requests
Customer tickets
Successful deployment frequency and time
Error/Failure rates
Failed deployments
Mean time to detection (MTTD)
Mean time to recovery (MTTR)

18. What is CBD in DevOps?


CBD stands for Component-Based Development. It is a unique way of approaching
product development. Here, developers look for existing well-defined, tested, and
verified components of code, which relieves them of having to develop everything
from scratch.

19. What is Resilience Testing?


Resilience Testing is a software testing process that tests the application for its behavior
under uncontrolled and chaotic scenarios. It also ensures that the data and
functionality are not lost after encountering a failure.

20. Can you differentiate between continuous testing and automation testing?
The difference between continuous testing and automation testing is given below:

Continuous Testing:
This is the process of executing all the automated test cases and is done as part of
the delivery process.
This process focuses on the business risks associated with releasing software as
early as possible.

Automation Testing:
This is a process that replaces manual testing by helping the developers create test
cases that can be run multiple times without manual intervention.
This process helps the developer to know whether the features they have developed
are bug-free or not by having a set of pass/fail points as a reference.

21. Can you say something about the DevOps pipeline?


A pipeline, in general, is a set of automated tasks/processes defined and followed by
the software engineering team. A DevOps pipeline allows DevOps engineers and
software developers to efficiently and reliably compile, build, and deploy the
software code to the production environments in a hassle-free manner.
The following image shows an example of an effective DevOps pipeline for deployment.

The flow is as follows:

The developer works on completing a functionality.
The developer deploys the code to the test environment.
Testers work on validating the feature. The business team can intervene and provide
feedback too.
Developers work on the test and business feedback in a continuously collaborative
manner.
The code is then released to production and validated again.

22. Tell me something about how Ansible works in DevOps

It is an open-source DevOps automation tool that helps in modernizing the
development and deployment process of applications in a faster manner. It has gained
popularity due to the simplicity of understanding, using, and adopting it, which has
largely helped people across the globe to work in a collaborative manner.

Ansible addresses the following challenges and needs of the Developers, Operations, and QA teams:

Challenges:
Developers tend to focus a lot of time on tooling rather than delivering results.
The Operations team requires uniform technology that can be used easily by groups with different skill sets.
The Quality Assurance team requires a way to keep track of what has been changed in a feature and when it has been changed.

Needs:
Developers need to respond to new features/bugs and scale their efforts based on demand.
The Operations team needs a central governing tool to monitor different systems and their workloads.
The Quality Assurance team needs to focus on reducing the risk of human error as much as possible for a bug-free product.

23. How does Ansible work?


Ansible has two types of servers, categorized as:
Controlling machines
Nodes
For this to work, Ansible is installed on the controlling machine, from which the nodes
are managed over SSH. The locations of the nodes are specified and configured in the
inventories of the controlling machine.
Ansible does not require any installation on the remote node servers due to its
agentless nature. Hence, no background process needs to be running while
managing any remote nodes.
Ansible can manage lots of nodes from a single controlling system by making use of
Ansible Playbooks over an SSH connection. Playbooks are in the YAML format and
are capable of performing multiple tasks.
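A minimal sketch of this model is shown below; the inventory group, host IPs, and playbook contents are assumptions, not a prescribed setup:

# Inventory file on the controlling machine listing the managed nodes.
cat > inventory.ini <<'EOF'
[webservers]
192.168.1.10
192.168.1.11
EOF

# A small playbook (YAML) describing the desired state of the nodes.
cat > site.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present
EOF

# Run the playbook from the controlling machine over SSH; no agent is needed on the nodes.
ansible-playbook -i inventory.ini site.yml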

24. How does AWS contribute to DevOps?


AWS stands for Amazon Web Services and it is a well-known cloud provider. AWS
helps DevOps by providing the below benefits:
Flexible Resources: AWS provides ready-to-use flexible resources for usage.
Scaling: Thousands of machines can be deployed on AWS by making use of
unlimited storage and computation power.
Automation: Lots of tasks can be automated by using various services provided
by AWS.
Security: AWS is secure and using its various security options provided under
the hood of Identity and Access Management (IAM), the application
deployments and builds can be secured.

25. What can be a preparatory approach for developing a project using the DevOps methodology?
The project can be developed by following the below stages, making use of
DevOps:

Stage 1: Plan: Plan and come up with a roadmap for implementation by
performing a thorough assessment of the already existing processes to identify
the areas of improvement and the blind spots.
Stage 2: PoC: Come up with a proof of concept (PoC) just to get an idea
regarding the complexities involved. Once the PoC is approved, the actual
implementation work of the project would start.
Stage 3: Follow DevOps: Once the project is ready for implementation, actual
DevOps culture could be followed by making use of its phases like version
control, continuous integration, continuous testing, continuous deployment,
continuous delivery, and continuous monitoring.

DevOps Interview Questions For Experienced


26. Can you explain the “Shift left to reduce failure” concept in DevOps?
In order to understand what this means, we first need to know how the traditional
SDLC cycle works. In the traditional cycle, there are 2 main sides -
The left side of the cycle consists of the planning, design, and development
phases.
The right side of the cycle includes stress testing, production staging, and user
acceptance.
In DevOps, shifting left simply means taking up as many of the tasks that usually take
place at the end of the application development process as possible into the earlier
stages of application development. From the below graph, we can see that if the shift
left operations are followed, the chances of errors faced during the later stages of
application development would greatly reduce, as they would have been identified
and solved in the earlier stages itself.

Shift Left To Reduce Failure

The most popular ways of accomplishing shift left in DevOps are to:

Work side by side with the development team while creating the deployment
and test case automation. This is the first and most obvious step in achieving shift
left. This is done because of the well-known fact that the failures that get noticed
in the production environment are quite often not seen earlier. These failures
can be linked directly to:
Different deployment procedures used by the development team while
developing their features.
Production deployment procedures sometimes tend to be way different
from the development procedures. There can be differences in tooling and
sometimes the process might also be manual.
Both the dev team and the operations teams are expected to take ownership to
develop and maintain standard procedures for deployment by making use of the
cloud and the pattern capabilities. This aids in giving the confidence that the
production deployments would be successful.
Usage of pattern capabilities to avoid configurational level inconsistencies in the
different environments being used. This would require the dev team and the
operation team to come together and work in developing a standard process
that guides developers to test their application in the development environment
in the same way as they test in the production environment.

27. Do you know about post mortem meetings in DevOps?


Post mortem meetings are those that are arranged when certain things go
wrong while implementing the DevOps methodology. When this meeting is
conducted, it is expected that the team arrives at the steps that need to be taken in
order to avoid the failure(s) in the future.

28. What is the concept behind sudo in Linux OS?


Sudo stands for ‘superuser do’ where the superuser is the root user of Linux. It is a
program for Linux/Unix-based systems that gives provision to allow the users with
superuser roles to use certain system commands at their root level.

29. Can you explain the architecture of Jenkins?

Jenkins follows the master-slave architecture. The master pulls the latest code from
the GitHub repository whenever a commit is made to the code. The master requests
the slaves to perform operations like build, test, and run, and to produce test case
reports. This workload is distributed to all the slaves in a uniform manner.
Jenkins uses multiple slaves because there might be cases that require different test
case suites to be run for different environments once the code commits are done.

Jenkins Architecture

30. Can you explain the “infrastructure as code” (IaC) concept?


As the name indicates, IaC mainly relies on perceiving infrastructure in the same way
as any code which is why it is commonly referred to as “programmable
infrastructure”. It simply provides means to define and manage the IT infrastructure
by using configuration files.

This concept came into prominence because of the limitations associated with the
traditional way of managing the infrastructure. Traditionally, the infrastructure was
managed manually and the dedicated people had to set up the servers physically.
Only after this step was done would the application be deployed. Manual
configuration and setup were constantly prone to human errors and inconsistencies.
This also involved increased cost in hiring and managing multiple people ranging
from network engineers to hardware technicians to manage the infrastructural tasks.
The major problem with the traditional approach was decreased scalability and
application availability which impacted the speed of request processing. Manual
configurations were also time-consuming and in case the application had a sudden
spike in user usage, the administrators would desperately work on keeping the
system available for a large load. This would impact the application availability.
IaC solved all the above problems. IaC can be implemented in 2 approaches:
Imperative approach: This approach “gives orders” and defines a sequence of
instructions that can help the system in reaching the final output.
Declarative approach: This approach “declares” the desired outcome first based
on which the infrastructure is built to reach the final result.
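The contrast can be sketched with hypothetical AWS commands; the AMI ID, instance type, and stack name below are placeholders:

# Imperative: issue explicit commands, step by step, to reach the target state.
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --count 1

# Declarative: describe the desired end state in a template and let the tool converge to it.
cat > stack.yml <<'EOF'
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678
      InstanceType: t2.micro
EOF
aws cloudformation deploy --template-file stack.yml --stack-name demo-stack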

31. What is ‘Pair Programming’?


Pair programming is an engineering practice where two programmers work on the
same system, the same design, and the same code. They follow the rules of “Extreme
Programming”. Here, one programmer is termed the “driver” while the other acts as
the “observer”, who continuously monitors the project progress to identify any further
problems.

32. What is Blue/Green Deployment Pattern?


A blue-green pattern is a type of continuous deployment/application release pattern
that focuses on gradually transferring user traffic from a previously working
version of the software or service to an almost identical new release, with both
versions running in production.
The blue environment indicates the old version of the application, whereas the
green environment is the new version.

The production traffic would be moved gradually from blue to green environment
and once it is fully transferred, the blue environment is kept on hold just in case of
rollback necessity.

In this pattern, the team has to ensure two identical prod environments but only one
of them would be LIVE at a given point of time. Since the blue environment is more
steady, the LIVE one is usually the blue environment.
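One possible way to implement the switch, sketched here with Kubernetes, is to repoint a service selector between two identical deployments; the service, deployment, and label names are assumptions:

# Two identical deployments run in production: web-blue (old) and web-green (new).
kubectl get deployments web-blue web-green
# The "web" service currently routes traffic to pods labelled version=blue.
# Flip the selector so all traffic goes to the green environment.
kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"green"}}}'
# Roll back, if needed, by pointing the selector at blue again.
kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"blue"}}}'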

33. What is Dogpile effect? How can it be prevented?


It is also referred to as a cache stampede, which can occur when huge parallel
computing systems employing caching strategies are subjected to very high load. It
refers to the event that occurs when the cache expires (or is invalidated) and
multiple requests hit the website at the same time. The most common way of
preventing dogpiling is by implementing semaphore locks in the cache. When the
cache expires in this system, the first process to acquire the lock generates the
new value for the cache.

34. What are the steps to be undertaken to configure a git repository so that it runs code sanity checking tools before any commit? How do you prevent the commit from happening if the sanity testing fails?

Sanity testing, also known as smoke testing, is a process used to determine if it's
reasonable to proceed with further testing.
Git provides a hook called pre-commit which gets triggered right before a
commit happens. A simple script making use of this hook can be written to
achieve the sanity check.
The script can run other tools like linters and perform sanity checks on the
changes that would be committed into the repository.
The following snippet is an example of one such script:

#!/bin/sh
# Collect the staged Python files that are about to be committed.
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$')
if [ -z "$files" ]; then
exit 0
fi
# Check the formatting of the staged files with the pyfmt tool.
unfmtd=$(pyfmt -l $files)
if [ -z "$unfmtd" ]; then
exit 0
fi
echo "Some .py files are not properly fmt'd"
exit 1

The above script checks whether any .py files that are about to be committed are
properly formatted by making use of the python formatting tool pyfmt. If the files
are not properly formatted, the script prevents the changes from being committed
to the repository by exiting with status 1.

35. How can you ensure a script runs every time the repository gets new commits through git push?
There are three means of setting up a script on the destination repository to get
executed depending on when the script has to be triggered exactly. These means are
called hooks and they are of three types:

Pre-receive hook: This hook is invoked before the references are updated when
commits are being pushed. This hook is useful in ensuring the scripts related to
enforcing development policies are run.
Update hook: This hook triggers the script to run before any updates are
actually made. This hook is called once for every commit which has been pushed
to the repository.
Post-receive hook: This hook helps trigger the script after the updates or
changes have been accepted by the destination repository. This hook is ideal for
configuring deployment scripts, any continuous integration-based scripts,
email notifications to the team, etc.
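As a hedged sketch, a post-receive hook is just an executable script named hooks/post-receive in the destination repository; the notification URL below is a placeholder:

#!/bin/sh
# hooks/post-receive in the destination repository (must be executable).
# Git feeds one line per updated ref on stdin: <old-sha> <new-sha> <ref-name>.
while read oldrev newrev refname; do
    echo "Received push on $refname ($oldrev -> $newrev)"
done
# Follow-up action, e.g. notify a CI server (placeholder URL).
curl -fsS -X POST http://ci.example.com/job/deploy/build || exit 1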

Conclusion
DevOps is a culture-shifting practice that has helped, and continues to help, lots of
businesses and organizations in a tremendous manner. It helps in bridging the gap
between the conflicting goals and priorities of the developers (constant need for
change) and the operations team (constant resistance to change) by creating a
smooth path for Continuous Development and Continuous Integration. Being a
DevOps engineer has huge benefits due to the ever-increasing demand for DevOps
practices.




Each question is accompanied by an answer so that you can prepare for a job interview in a short time.
We have compiled this list after attending dozens of technical interviews in top-notch companies like Airbnb, Netflix, Amazon, etc.
Often, these questions and concepts are used in our daily work. But they are most helpful when an interviewer is trying to test your deep
knowledge of DevOps.
Once you go through them in the first pass, mark the questions that you could not answer by yourself. Then, in the second pass, go through only the
difficult questions.
After going through this book 2-3 times, you will be well prepared to face a technical interview for a DevOps Engineer position.

DevOps Interview Questions


1. What are the popular DevOps tools that you use?


We use following tools for work in DevOps:

I. Jenkins : This is an open source automation server used as a continuous integration tool. We can build,
deploy and run automated tests with Jenkins.
II. GIT : It is a version control tool used for tracking changes in files and software.
III. Docker : This is a popular tool for containerization of services. It is very useful in Cloud based deployments.
IV. Nagios : We use Nagios for monitoring of IT infrastructure.
V. Splunk : This is a powerful tool for log search as well as monitoring production systems.
VI. Puppet : We use Puppet to automate our DevOps work so that it is reusable.

2. What are the main benefits of DevOps?


DevOps is a very popular trend in Software Development. Some of the main benefits of DevOps are as follows:

I. Release Velocity : DevOps practices help in increasing the release velocity. We can release code to
production more often and with more confidence.

II. Development Cycle : With DevOps, the complete Development cycle from initial design to production
deployment becomes shorter.

III. Deployment Rollback : In DevOps, we plan for any failure in deployment rollback due to a bug in code or
issue in production. This gives confidence in releasing feature without worrying about downtime for rollback.

IV. Defect Detection : With DevOps approach, we can catch defects much earlier than releasing to production.
It improves the quality of the software.

V. Recovery from Failure : In case of a failure, we can recover very fast with DevOps process.

VI. Collaboration : With DevOps, collaboration between development and operations professionals increases.

VII. Performance-oriented : With DevOps, organization follows performance-oriented culture in which teams
become more productive and more innovative.
3. What is the typical DevOps workflow you use in your organization?
The typical DevOps workflow in our organization is as follows:

I. We use Atlassian Jira for writing requirements and tracking tasks.


II. Based on the Jira tasks, developers check in code into the GIT version control system.
III. The code checked into GIT is built by using Apache Maven.
IV. The build process is automated with Jenkins.
V. During the build process, automated tests run to validate the code checked in by developer.
VI. Code built on Jenkins is sent to organization’s Artifactory.
VII. Jenkins automatically picks the libraries from Artifactory and deploys it to Production.
VIII. During Production deployment Docker images are used to deploy same code on multiple hosts.
IX. Once code is deployed to Production, we use Nagios to monitor the health of production servers.
X. Splunk based alerts inform us of any issues or exceptions in production.

4. How do you take DevOps approach with Amazon Web Services?


Amazon Web Services (AWS) provide many tools and features to deploy and manage applications in AWS. As per DevOps,
we treat infrastructure as code. We mainly use following two services from AWS for DevOps:

I. CloudFormation : We use AWS CloudFormation to create and deploy AWS resources by using templates.
We can describe our dependencies and pass special parameters in these templates. CloudFormation can read
these templates and deploy the application and resources in AWS cloud.

II. OpsWorks : AWS provides another service called OpsWorks that is used for configuration management by
utilizing Chef framework. We can automate server configuration, deployment and management by using
OpsWorks. It helps in managing EC2 instances in AWS as well as any on-premises servers.

5. How will you run a script automatically when a developer commits a


change into GIT?
GIT provides a feature to execute custom scripts when certain events occur in GIT. This feature is called hooks.

We can write two types of hooks.


I. Client-side hooks
II. Server-side hooks

For this case, we can write a Client-side post-commit hook. This hook will execute a custom script in which we can add the
message and code that we want to run automatically with each commit.
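A minimal sketch of such a client-side hook is shown below; the message printed and any follow-up command are up to the team:

#!/bin/sh
# .git/hooks/post-commit (client-side, must be executable).
# Runs automatically after every local commit.
commit_msg=$(git log -1 --pretty=%B)
echo "Committed: $commit_msg"
# Any custom automation (a quick lint, a notification, etc.) can be added here.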

6. What are the main features of AWS OpsWorks Stacks?


Some of the main features of AWS OpsWorks Stacks are as follows:
I. Server Support : With AWS OpsWorks Stacks, we can automate operational tasks on any server in AWS as well
as in our own data center.
II. Scalable Automation : We get automated scaling support with AWS OpsWorks Stacks. Each new instance
in AWS can read its configuration from OpsWorks. It can even respond to system events in the same way as other
instances do.
III. Dashboard : We can create dashboards in OpsWorks to display the status of all the stacks in AWS.
IV. Configuration as Code : AWS OpsWorks Stacks is built on the principle of “Configuration as Code”. We
can define and maintain configurations like application source code. The same configuration can be replicated on
multiple servers and environments.
V. Application Support : OpsWorks supports almost all kinds of applications, so it is universal in nature.

7. How does CloudFormation work in AWS?


AWS CloudFormation is used for deploying AWS resources.
In CloudFormation, we first have to create a template for a resource. A template is a simple text file that contains information
about a stack on AWS. A stack is a collection of AWS resources that we want to deploy together as a group.

Once the template is ready and submitted to AWS, CloudFormation will create all the resources in the template. This helps in
the automation of building new environments in AWS.
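The flow can be sketched with the AWS CLI; the template, stack name, and resource below are assumptions kept deliberately small:

# A minimal template describing one resource (an S3 bucket) in a stack.
cat > s3-stack.yml <<'EOF'
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
EOF

# Submit the template; CloudFormation creates every resource it describes.
aws cloudformation create-stack --stack-name demo-stack --template-body file://s3-stack.yml
# Wait until the whole stack has been created.
aws cloudformation wait stack-create-complete --stack-name demo-stack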

8. What is CICD in DevOps?


CICD stands for Continuous Integration and Continuous Delivery. These are two different concepts that are complementary to
each other.

Continuous Integration (CI) : In CI all the developer work is merged to main branch several times a day. This helps in
reducing integration problems.

In CI we try to minimize the duration for which a branch remains checked out. A developer gets early feedback on the new
code added to main repository by using CI.

Continuous Delivery (CD) : In CD, a software team plans to deliver software in short cycles. They perform development,
testing and release in such a short time that incremental changes can be easily delivered to production.

In CD, as DevOps engineers, we create a repeatable deployment process that can help achieve the objective of Continuous Delivery.

9. What are the best practices of Continuous Integration (CI)?

Some of the best practices of Continuous Integration (CI) are as follows:

I. Build Automation : In CI, we create such a build environment that even with one command build can be
triggered. This automation is done all the way up to deployment to Production environment.
II. Main Code Repository : In CI, we maintain a main branch in code repository that stores all the Production
ready code. This is the branch that we can deploy to Production any time.
III. Self-testing build : Every build in CI should be self-tested. It means with every build there is a set of tests that
runs to ensure that changes are of high quality.
IV. Every day commits to baseline : Developers will commit all of their changes to the baseline every day. This
ensures that there is no big pileup of code waiting for integration with the main repository for a long time.
V. Build every commit to baseline : With Automated Continuous Integration, every time a commit is made into
baseline, a build is triggered. This helps in confirming that every change integrates correctly.
VI. Fast Build Process : One of the requirements of CI is to keep the build process fast so that we can quickly
identify any problem.
VII. Production like environment testing : In CI, we maintain a production like environment also known as pre-
production or staging environment, which is very close to Production environment. We perform testing in this
environment to check for any integration issues.
VIII. Publish Build Results : We publish build results on a common site so that everyone can see these and take
corrective actions.
IX. Deployment Automation : The deployment process is automated to the extent that in a build process we can
add the step of deploying the code to a test environment. On this test environment all the stakeholders can
access and test the latest delivery.

10. What are the benefits of Continuous Integration (CI)?


The benefits of Continuous Integration (CI) are as follows:
I. CI makes the current build constantly available for testing, demo and release purpose.
II. With CI, developers write modular code that works well with frequent code check-ins.
III. In case of a unit test failure or bug, the developer can easily revert to the bug-free state of the code.
IV. There is drastic reduction in chaos on release day with CI practices.
V. With CI, we can detect Integration issues much earlier in the process.
VI. Automated testing is one very useful side effect of implementing CI.
VII. All the stakeholders including business partners can see the small changes deployed into pre-production
environment. This provides early feedback on the changes to software.
VIII. Automated CI and testing generates metrics like code-coverage, code complexity that help in improving the
development process.
11. What are the options for security in Jenkins?
In Jenkins, it is very important to make the system secure by setting user authentication and authorization. To do this we have
to do following:

I. First we have to set up the Security Realm. We can integrate Jenkins with LDAP server to create user
authentication.
II. Second part is to set the authorization for users. This determines which user has access to what resources.

In Jenkins some of the options to setup security are as follows:

I. We can use Jenkins’ own User Database.


II. We can use LDAP plugin to integrate Jenkins with LDAP server.
III. We can also setup Matrix based security on Jenkins.

12. What are the main benefits of Chef?


Chef is an automation tool for keeping infrastructure as code. It has many benefits. Some of these are as follows:

I. Cloud Deployment : We can use Chef to perform automated deployment in Cloud environment.

II. Multi-cloud support : With Chef we can even use multiple cloud providers for our infrastructure.

III. Hybrid Deployment : Chef supports both Cloud based as well as datacenter-based infrastructure.

IV. High Availability : With Chef automation, we can create high availability environment. In case of hardware
failure, Chef can maintain or start new servers in automated way to maintain highly available environment.

13. What is the architecture of Chef?


Chef is composed of many components like Chef Server, Client etc. Some of the main components in Chef are as follows:

I. Client : These are the nodes or individual users that communicate with Chef server.
II. Chef Manage : This is the web console that is used for interacting with Chef Server.
III. Load Balancer : All the Chef server API requests are routed through Load Balancer. It is implemented in
Nginx.
IV. Bookshelf : This is the component that stores cookbooks. All the cookbooks are stored in a repository. It is
separate storage from the Chef server.
V. PostgreSQL : This is the data repository for Chef server.
VI. Chef Server : This is the hub for configuration data. All the cookbooks and policies are stored in it. It can
scale to the size of any enterprise.

14. What is a Recipe in Chef?


In any organization, Recipe is the most fundamental configuration element.
It is written in Ruby language. It is a collection of resources defined by using patterns.

A Recipe is stored in a Cookbook and it may have dependency on other Recipe.

We can tag Recipe to create some kind of grouping.

We have to add a Recipe in run-list before using it by chef-client.

It always maintains the execution order specified in run-list.

15. What are the main benefits of Ansible?


Ansible is a powerful tool for IT Automation for large scale and complex deployments. It increases the productivity of team.
Some of the main benefits of Ansible are as follows:
I. Productivity : It helps in delivering and deploying with speed. It increases productivity in an organization.

II. Automation : Ansible provides very good options for automation. With automation, people can focus on
delivering smart solutions.

III. Large-scale : Ansible can be used in small as well as very large-scale organizations.

IV. Simple DevOps : With Ansible, we can write automation in a human-readable language. This simplifies the
task of DevOps.

16. What are the main use cases of Ansible?


Some of the popular use cases of Ansible are as follows:

I. App Deployment : With Ansible, we can deploy apps in a reliable and repeatable way.

II. Configuration Management : Ansible supports the automation of configuration management across multiple
environments.

III. Continuous Delivery : We can release updates with zero downtime with Ansible.

IV. Security : We can implement complex security policies with Ansible.

V. Compliance : Ansible helps in verifying an organization's systems for compliance with rules and
regulations.

VI. Provisioning : We can provision new systems and resources for other users with Ansible.

VII. Orchestration : Ansible can be used in orchestration of complex deployment in a simple way.

17. What is Docker Hub?


Docker Hub is a cloud-based registry. We can use Docker Hub to link code repositories. We can even build images and store
them in Docker Hub. It also provides links to Docker Cloud to deploy the images to our hosts.

Docker Hub is a central repository for container image discovery, distribution, change management, workflow automation and
team collaboration.
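A typical push/pull round trip looks roughly like this; the account name (myaccount) and image name are placeholders:

# Log in to Docker Hub with your account credentials.
docker login
# Build an image from a local Dockerfile and tag it with a Docker Hub repository name.
docker build -t myaccount/myapp:1.0 .
# Push the image to Docker Hub so other hosts and teams can pull it.
docker push myaccount/myapp:1.0
# Any other machine can now pull and run the same image.
docker pull myaccount/myapp:1.0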

18. What is your favorite scripting language for DevOps?


In DevOps, we use different scripting languages for different purposes. There is no single language that can work in all the
scenarios. Some of the popular scripting languages that we use are as follows:

I. Bash : On Unix based systems we use Bash shell scripting for automating tasks.

II. Python : For complicated programming and large modules we use Python. We can easily use a wide variety of
standard libraries with Python.

III. Groovy : This is a Java-based scripting language. We need a JVM installed in an environment to use Groovy. It
is very powerful and provides rich features.

IV. Perl : This is another language that is very useful for text parsing. We use it in web applications.
19. What is Multi-factor authentication?
In security implementation, we use Multi-factor authentication (MFA). In MFA, a user is authenticated by multiple means
before giving access to a resource or service. It is different from simple user/password based authentication.

The most popular implementation of MFA is Two-factor authentication. In most of the organizations, we use
username/password and an RSA token as two factors for authentication.

With MFA, the system becomes more secure and it cannot be easily hacked.

20. What are the main benefits of Nagios?


Nagios is open source software to monitor systems, networks and infrastructure. The main benefits of Nagios are as follows:

I. Monitor : DevOps can configure Nagios to monitor IT infrastructure components, system metrics and
network protocols.

II. Alert : Nagios will send alerts when a critical component in infrastructure fails.

III. Response : DevOps acknowledges alerts and takes corrective actions.

IV. Report : Periodically Nagios can publish/send reports on outages, events and SLAs etc.

V. Maintenance: During maintenance windows, we can also disable alerts.

VI. Planning : Based on past data, Nagios helps in infrastructure planning and upgrades.

21. What is State Stalking in Nagios?


State Stalking is a very useful feature. Though all the users do not use it all the time, it is very helpful when we want to
investigate an issue.

In State Stalking, we can enable stalking on a host. Nagios will monitor the state of the host very carefully and it will log any
changes in the state.

By this we can identify what changes might be causing an issue on the host.

22. What are the main features of Nagios?


Some of the main features of Nagios are as follows:

I. Visibility : Nagios provides a centralized view of the entire IT infrastructure.

II. Monitoring : We can monitor all the mission critical infrastructure components with Nagios.

III. Proactive Planning : With Capacity Planning and Trending we can proactively plan to scale up or scale down
the infrastructure.

IV. Extendable : Nagios is extendable with third-party tools and APIs.

V. Multi-tenant : Nagios supports a multi-tenant architecture.

23. What is Puppet?


Puppet Enterprise is a DevOps software platform that is used for automation of infrastructure operations. It runs on Unix as
well as on Windows.

We can define system configuration by using Puppet’s language or Ruby DSL.

The system configuration described in Puppet’s language can be distributed to a target system by using REST API calls.

24. What is the architecture of Puppet?


Puppet is Open Source software. It is based on Client-server architecture. It is a Model Driven system. The client is also
called Agent. And server is called Master.

It has following architectural components:

I. Configuration Language : Puppet provides a language that is used to configure Resources. We have to
specify what Action has to be applied to which Resource.

The Action has three items for each Resource: type, title and list of attributes of a resource. Puppet code is
written in Manifests files.

II. Resource Abstraction : We can create Resource Abstraction in Puppet so that we can configure resources
on different platforms. The Puppet agent uses Facter for passing the information of an environment to the Puppet
server. In Facter we have information about the IP, hostname, OS, etc. of the environment.

III. Transaction : In Puppet, Agent sends Facter to Master server. Master sends back the catalog to Client.
Agent applies any configuration changes to system. Once all changes are applied, the result is sent to Server.

25. What are the main use cases of Puppet Enterprise?


We can use Puppet Enterprise for following scenarios:
I. Node Management : We can manage a large number of nodes with Puppet.
II. Code Management : With Puppet we can define Infrastructure as code. We can review, deploy, and test the
environment configuration for Development, Testing and Production environments.
III. Reporting & Visualization : Puppet provides Graphical tools to visualize and see the exact status of
infrastructure configuration.
IV. Provisioning Automation : With Puppet we can automate deployment and creation of new servers and
resources. So users and business can get their infrastructure requirements completed very fast with Puppet.
V. Orchestration : For a large Cluster of nodes, we can orchestrate the complete process by using Puppet. It
can follow the order in which we want to deploy the infrastructure environments.
VI. Automation of Configuration : With Configuration automation, the chances of manual errors are reduced.
The process becomes more reliable with this.

26. What is the use of Kubernetes?


We use Kubernetes for automation of large-scale deployment of Containerized applications.

It is an open source system based on concepts similar to Google’s deployment process of millions of containers.

It can be used on cloud, on-premise datacenter and hybrid infrastructure.

In Kubernetes we can create a cluster of servers that are connected to work as a single unit. We can deploy a containerized
application to all the servers in a cluster without specifying the machine name.

We have to package applications in such a way that they do not depend on a specific host.
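A small sketch of this idea with kubectl; the image and names are assumptions, and the exact commands depend on the cluster setup:

# Create a deployment; Kubernetes decides which servers run the containers.
kubectl create deployment web --image=nginx
# Scale it to three identical replicas across the cluster.
kubectl scale deployment web --replicas=3
# Expose the deployment as a service so clients reach it through one endpoint.
kubectl expose deployment web --port=80 --type=LoadBalancer
# See which nodes the containers were scheduled on.
kubectl get pods -o wide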

27. What is the architecture of Kubernetes?

The architecture of Kubernetes consists of following components:

Master : There is a master node that is responsible for managing the cluster. Master performs following functions in a cluster.
I. Scheduling Applications
II. Maintaining desired state of applications
III. Scaling applications
IV. Applying updates to applications

Nodes : A Node in Kubernetes is responsible for running an application. The Node can be a Virtual Machine or a Computer
in the cluster. There is software called Kubelet on each node. This software is used for managing the node and communicating
with the Master node in cluster.

There is a Kubernetes API that is used by Nodes to communicate with the Master. When we deploy an application on
Kubernetes, we request Master to start application containers on Nodes.

28. How does Kubernetes provide high availability of applications in a


Cluster?

In a Kubernetes cluster, there is a Deployment Controller. This controller monitors the instances created by Kubernetes in a
cluster. Once a node or the machine hosting the node goes down, the Deployment Controller replaces the affected instances on other nodes.

It is a self-healing mechanism in Kubernetes to provide high availability of applications.

Therefore in Kubernetes cluster, Kubernetes Deployment Controller is responsible for starting the instances as well as
replacing the instances in case of a failure.

29. Why Automated Testing is a must requirement for DevOps?


In DevOps approach we release software with high frequency to production. We have to run tests to gain confidence on the
quality of software deliverables.

Running tests manually is a time-consuming process. Therefore, we first prepare automation tests and then deliver the software. This
ensures that we catch any defects early in our process.

30. What is Chaos Monkey in DevOps?

Chaos Monkey is a concept made popular by Netflix. In Chaos Monkey, we intentionally try to shut down the services or
create failures. By failing one or more services, we test the reliability and recovery mechanism of the Production architecture.

It checks whether our applications and deployment have survival strategy built into it or not.

31. How do you perform Test Automation in DevOps?


We use Jenkins to create automated flows to run Automation tests. The first part of test automation is to develop test strategy
and test cases. Once automation test cases are ready for an application, we have to plug these into each Build run.
In each Build we run Unit tests, Integration tests and Functional tests.

With a Jenkins job, we can automate all these tasks. Once all the automated tests pass, we consider the build as green. This
helps in deployment and release processes to build confidence on the application software.

32. What are the main services of AWS that you have used?
We use following main services of AWS in our environment:

I. EC2 : This is the Elastic Compute Cloud by Amazon. It is used for providing computing capability to a
system. We can use it in place of our standalone servers. We can deploy different kinds of applications on
EC2.
II. S3 : We use S3 in Amazon for our storage needs.

III. DynamoDB : We use DynamoDB in AWS for storing data in NoSQL database form.

IV. Amazon CloudWatch : We use CloudWatch to monitor our application in Cloud.


V. Amazon SNS : We use Simple Notification Service to inform users about any issues in Production
environment.

33. Why GIT is considered better than CVS for version control system?
GIT is a distributed system. In GIT, any person can create their own branch and start checking in code. Once the code is
tested, it is merged into the main GIT repo. In between, Dev, QA and Product can validate the implementation of that code.

In CVS, there is a centralized system that maintains all the commits and changes.
GIT is open source software and there are plenty of extensions in GIT for use by our teams.
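
A typical branch-and-merge flow in GIT looks like the following sketch (branch names are examples; the default branch may be master or main depending on the repository):

% git checkout -b feature-login
% git commit -am "Add login feature"
% git push origin feature-login
% git checkout master
% git merge feature-login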

34. What is the difference between a Container and a Virtual Machine?


We need to select an Operating System (OS) to get a specific Virtual Machine (VM). VM provides full OS to an application
for running in a virtualized environment.

A Container uses APIs of an Operating System (OS) to provide runtime environment to an application.

A Container is very lightweight in comparison with a VM.

VM provides higher level of security compared to a Container.

A Container just provides the APIs that are required by the application.

35. What is Serverless architecture?


Serverless Architecture is a term that refers to following:

I. An Application that depends on a third-party service.


II. An Application in which Code is run on ephemeral containers.

In AWS, Lambda is a popular service to implement Serverless architecture.

Another concept in Serverless Architecture is to treat code as a service or Function as a Service (FAAS). We just write code
that can be run on any environment or server without the need of specifying which server should be used to run this code.

36. What are the main principles of DevOps?


DevOps is different from Technical Operations. It has following main principles:

I. Incremental : In DevOps we aim to incrementally release software to production. We do releases to
production more often than the Waterfall approach of one large release.

II. Automated : To enable us to make releases more often, we automate the operations from Code Check-in to
deployment in Production.

III. Collaborative : DevOps is not only the responsibility of the Operations team. It is a collaborative effort of Dev, QA,
Release and DevOps teams.

IV. Iterative : DevOps is based on Iterative principle of using a process that is repeatable. But with each iteration
we aim to make the process more efficient and better.

V. Self-Service : In DevOps, we automate things and give self-service options to other teams so that they are
empowered to deliver the work in their domain.

37. Are you more Dev or more Ops?


This is a tricky question. DevOps is a relatively new concept and in any organization the maturity of DevOps varies from highly
Operations oriented to highly DevOps oriented. In some projects, teams are very mature and practice DevOps in its true form.
In some projects, teams rely more on the Operations team.

As a DevOps person I give first priority to the needs of the organization and project. At times I may have to perform a lot
of operations work. But with each iteration, I aim to bring DevOps changes incrementally to the organization.

Over time, organization/project starts seeing results of DevOps practices and embraces it fully.

38. What is a REST service?


REST is also known as Representational State Transfer. A REST service is a simple software functionality that is available
over HTTP protocol. It is a lightweight service that is widely available due to the popularity of HTTP protocol.

Since REST is lightweight, it has very good performance in a software system. It is also one of the foundations for creating
highly scalable systems that provide a service to a large number of clients.

Another key feature of a REST service is that as long as the interface is kept the same, we can change the underlying
implementation. E.g. clients of a REST service can keep calling the same service while we change the implementation from PHP
to Java.
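
As a quick illustration (the URL api.example.com is hypothetical), a REST service can be called over HTTP with any client, for example curl:

% curl -X GET https://api.example.com/users/42
% curl -X POST -H "Content-Type: application/json" -d '{"name":"Sam"}' https://api.example.com/users

The client depends only on the HTTP interface, not on the language the service is implemented in.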

39. What are the Three Ways of DevOps?


Three Ways of DevOps refers to three basic principles of DevOps culture. These are as follows:

I. The First Way: Systems Thinking : In this principle we see DevOps as a flow of work from left to right.
This is the time taken from Code check-in to the feature being released to the end customer. In DevOps culture
we try to identify the bottlenecks in this flow.

II. The Second Way: Feedback Loops : Whenever there is an issue in production, it is feedback about the
whole development and deployment process. We try to make the feedback loop more efficient so that teams
can get the feedback much faster. It is a way of catching defects much earlier in the process than having them
reported by customers.

III. The Third Way: Continuous Learning : We make use of first and second way principles to keep on making
improvements in the overall process. This is the third principle in which over the time we make the process and
our operations highly efficient, automated and error free by continuously improving them.

40. How do you apply DevOps principles to make system Secure?


Security of a system is one of the most important goals for an organization. We use following ways to apply DevOps to
security.

I. Automated Security Testing : We automate and integrate Security testing techniques for Software
Penetration testing and Fuzz testing in software development process.

II. Early Security Checks : We ensure that teams know about the security concerns at the beginning of a
project, rather than at the end of delivery. It is achieved by conducting Security trainings and knowledge
sharing sessions.

III. Standard Process : In DevOps we try to follow a standard deployment and development process that has
already gone through security audits. This helps in minimizing the introduction of any new security loopholes
due to changes in the standard process.
41. What is Self-testing Code?
Self-testing Code is an important feature of DevOps culture. In DevOps culture, development team members are expected to
write self-testing code. It means we have to write code along with the tests that can test this code. Once the test passes, we
feel confident to release the code.

If we get an issue in production, we first write an automation test to validate that the issue happens in current release. Once the
issue in release code is fixed, we run the same test to validate that the defect is not there. With each release we keep running
these tests so that the issue does not appear anymore.

One of the techniques of writing Self-testing code is Test Driven Development (TDD).

42. What is a Deployment Pipeline?


A Deployment Pipeline is an important concept in Continuous Delivery. In Deployment Pipeline we break the build process
into distinct stages. In each stage we get the feedback to move onto the next stage.

It is a collaborative effort between various groups involved in delivering software development.


Often the first stage in Deployment Pipeline is compiling the code and converting into binaries.

After that we run the automated tests. Depending on the scenario, there are stages like performance testing, security check,
usability testing etc in a Deployment Pipeline.

In DevOps, our aim is to automate all the stages of Deployment Pipeline. With a smooth running Deployment Pipeline, we can
achieve the goal of Continuous Delivery.

43. What are the main features of Docker Hub?


Docker Hub provides following main features:

I. Image Repositories : In Docker Hub we can push, pull, find and manage Docker Images. It is a big library
that has images from community, official as well as private sources.

II. Automated Builds : We can use Docker Hub to create new images by making changes to source
code repository of the image.

III. Webhooks : With Webhooks in Docker Hub we can trigger actions that can create and build new images by
pushing a change to repository.

IV. Github/Bitbucket integration : Docker Hub also provides integration with Github and Bitbucket systems.

44. What are the security benefits of using Container based system?
Some of the main security benefits of using a Container based system are as follows:

I. Segregation : In a Container based system we segregate the applications on different containers. Each
application may be running on same host but in a separate container. Each application has access to ports, files
and other resources that are provided to it by the container.

II. Transient : In a Container based system, each application is considered a transient system. It is better than
a static system with a fixed environment that can be exposed over time.

III. Control: We use repeatable scripts to create the containers. This provides us tight control over the software
application that we want to deploy and run. It also reduces the risk of unwanted changes in setup that can
cause security loopholes.

IV. Security Patch: In a Container based system, we can deploy security patches on multiple containers in a
uniform way. Also it is easier to patch a Container with an application update.

45. How many heads can you create in a GIT repository?


There can be any number of heads in a GIT repository.

By default there is one head known as HEAD in each repository in GIT.
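
For example, we can list the branch heads and see the commit that HEAD currently points to:

% git branch
% git rev-parse HEAD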

46. What is a Passive check in Nagios?


In Nagios, we can monitor hosts and services by active checks. In addition, Nagios also supports Passive checks that are
initiated by external applications.

The results of Passive checks are submitted to Nagios. There are two main use cases of Passive checks:

I. We use Passive checks to monitor asynchronous services that cannot be reliably checked by Active checks at
regular intervals of time.

II. We can use Passive checks to monitor services or applications that are located behind a firewall.

47. What is a Docker container?


A Docker Container is a lightweight system that can be run on a Linux operating system or a virtual machine. It is a package of
an application and related dependencies that can be run independently.

Since Docker Container is very lightweight, multiple containers can be run simultaneously on a single server or virtual machine.

With a Docker Container we can create an isolated system with restricted services and processes. A Container has private
view of the operating system. It has its own process ID space, file system, and network interface.

Multiple Docker Containers can share same Kernel.

48. How will you remove an image from Docker?


We can use docker rmi command to delete an image from our local system.

Exact command is:

% docker rmi <Image Id>

If we want to find the IDs of all the Docker images in our local system, we can use the docker images command.

% docker images

If we want to remove a docker container then we use docker rm command.

% docker rm <Container Id>

49. What are the common use cases of Docker?


Some of the common use cases of Docker are as follows:

I. Setting up Development Environment : We can use Docker to set the development environment with the
applications on which our code is dependent.
II. Testing Automation Setup : Docker can also help in creating the Testing Automation setup. We can setup
different services and apps with Docker to create the automation-testing environment.
III. Production Deployment : Docker also helps in implementing the Production deployment for an application.
We can use it to create the exact environment and process that will be used for doing the production
deployment.
50. Can we lose our data when a Docker Container exits?
A Docker Container has its own file-system. In an application running on Docker Container we can write to this file-system.
When the container exits, data written to file-system still remains. When we restart the container, same data can be accessed
again.

Only when we delete the container, related data will be deleted.
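
A small sketch of this behaviour, assuming the ubuntu image is available locally or on Docker Hub:

% docker run --name demo ubuntu bash -c "echo hello > /tmp/data.txt"
% docker cp demo:/tmp/data.txt .        # the file is still there after the container has exited
% docker rm demo                        # removing the container also removes its writable layer and the data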

Docker Questions

51. What is Docker?

Docker is Open Source software. It provides the automation of Linux application deployment in a software container.

We can do operating system level virtualization on Linux with Docker.

Docker can package software in a complete file system that contains software code, runtime environment, system tools, &
libraries that are required to install and run the software on a server.

52. What is the difference between Docker image and Docker container?

Docker container is simply an instance of Docker image.

A Docker image is an immutable file, which is a snapshot of a container. We create an image with the build command.

When we use the run command, an Image produces a container.

In programming-language terms, an Image is a Class and a Container is an instance of the class.
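
As a minimal example (the image name myapp is hypothetical and assumes a Dockerfile in the current directory):

% docker build -t myapp:1.0 .                      # builds an image
% docker run -d --name myapp-instance myapp:1.0    # creates and starts a container from that image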

53. How is a Docker container different from a hypervisor?


In a Hypervisor environment we first create a Virtual Machine and then install an Operating System on it. After that we deploy
the application. The virtual machine may also be installed on different hardware configurations.

In a Docker environment, we just deploy the application in Docker. There is no separate guest OS layer in this environment. We package
the application with its libraries, and the kernel is shared with the host via the Docker engine.

In a way, Docker container and hypervisor are complementary to each other.

54. Can we write compose file in json file instead of yaml?


Yes. Yaml format is a superset of json format. Therefore any json file is also a valid Yaml file.

If we use a json file then we have to specify in docker command that we are using a json file as follows:

% docker-compose -f docker-compose.json up
55. Can we run multiple apps on one server with Docker?

Yes, theoretically we can run multiple apps on one Docker server. But in practice, it is better to run different components in
separate containers.

With this we get a cleaner environment, and each container can be reused for multiple purposes.

56. What are the main features of Docker-compose?


Some of the main features of Docker-compose are as follows:

I. Multiple environments on same Host : We can use it to create multiple environments on the same host
server.
II. Preserve Volume Data on Container Creation : Docker compose also preserves the volume data when
we create a container.
III. Recreate the changed Containers : We can also use compose to recreate the changed containers.
IV. Variables in Compose file : Docker compose also supports variables in compose file. In this way we can
create variations of our containers.

57. What is the most popular use of Docker?


The most popular use of Docker is in build pipeline. With the use of Docker it is much easier to automate the development to
deployment process in build pipeline.

We use Docker for the complete build flow from development work, test run and deployment to production environment.

58. What is the role of open source development in the popularity of Docker?

Since Linux was an open source operating system, it opened new opportunities for developers who want to contribute to open
source systems.

One of the very good outcomes of open source software is Docker. It has very powerful features.

Docker has wide acceptance due to its usability as well as its open source approach of integrating with different systems.

59. What is the difference between Docker commands: up, run and start?
We have up and start commands in docker-compose. The run command is in docker.

a. Up : We use this command to build, create, start or restart all the services in a docker-compose.yml file. It
also attaches to containers for a service.

This command can also start linked services.

b. Run : We use this command for ad hoc requests. It just starts the service that we specifically want to start.
We generally use it to run specific tests or administrative tasks.
c. Start : This command is used to start containers that were previously created but are not currently
running. This command does not create new containers.
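
For illustration (the service name web is hypothetical), the three commands look like this:

% docker-compose up -d          # build, create and start all services in docker-compose.yml
% docker-compose run web sh     # ad hoc one-off command against the web service
% docker-compose start          # start previously created containers without creating new ones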

60. What is Docker Swarm?


Docker Swarm is used to create a cluster environment. It can turn a group of Docker engines into a Single virtual Docker
Engine. This creates a system with pooled resources. We can use Docker Swarm to scale our application.
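
A minimal sketch using Docker's built-in swarm mode (the service name web is only an example):

% docker swarm init
% docker service create --name web --replicas 3 nginx
% docker service ls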

61. What are the features of Docker Swarm?


Some of the key features of Docker Swarm are as follows:

I. Compatible : Docker Swarm is compatible with standard Docker API.


II. High Scalability : Swarm can scale up to as much as 1000 nodes and 50000 containers. There is almost no
performance degradation at this scale in Docker Swarm.
III. Networking : Swarm comes with support for Docker Networking.
IV. High Availability : We can create a highly available system with Docker Swarm. It allows us to create
multiple master nodes so that in case of a failure, another node can take over.
V. Node Discovery : In Docker Swarm, we can add more nodes, and the new nodes can be found with a
discovery service like etcd or ZooKeeper.

62. What is a Docker Image?

Docker Image is the blueprint that is used to create a Docker Container. Whenever we want to run a container we have to
specify the image that we want to run.

There are many Docker images available online for standard software. We can use these images directly from the source.

The standard set of Docker Images is stored in Docker Hub Registry. We can download these from this location and use it in
our environment.

We can also create our own Docker Image with the software that we want to run as a container.

63. What is a Docker Container?


A Docker Container is a lightweight system that can be run on a Linux operating system or a virtual machine. It is a package of
an application and related dependencies that can be run independently.

Since Docker Container is very lightweight, multiple containers can be run simultaneously on a single server or virtual machine.

With a Docker Container we can create an isolated system with restricted services and processes. A Container has private
view of the operating system. It has its own process ID space, file system, and network interface.

Multiple Docker Containers can share same Kernel.

64. What is Docker Machine?


We can use Docker Machine to install Docker Engine on virtual hosts. It also provides commands to manage virtual hosts.

Some of the popular Docker machine commands enable us to start, stop, inspect and restart a managed host.

Docker Machine provides a Command Line Interface (CLI), which is very useful in managing multiple hosts.
65. Why do we use Docker Machine?
There are two main uses of Docker Machine:

I. Old Desktop : If we have an old desktop and we want to run Docker then we use Docker Machine to run
Docker. It is like installing a virtual machine on an old hardware system to run Docker engine.

II. Remote Hosts : Docker Machine is also used to provision Docker hosts on remote systems. By using
Docker Machine you can install Docker Engine on remote hosts and configure clients on them.
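
As an example, assuming the VirtualBox driver is installed (the machine name dev is arbitrary):

% docker-machine create --driver virtualbox dev
% docker-machine ls
% eval $(docker-machine env dev)      # point the local Docker client at the new host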

66. How will you create a Container in Docker?


To create a Container in Docker we have to create a Docker Image. We can also use an existing Image from Docker Hub
Registry.

We can run an Image to create the container.
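
For example, using an existing image from Docker Hub (the container name web-container is arbitrary):

% docker pull nginx
% docker run -d --name web-container nginx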

67. Do you think Docker is Application-centric or Machine-centric?


Docker is an Application-centric solution. It is optimized for deployment of an application. It does not replace a machine by
creating a virtual machine. Rather, it focuses on providing ease of use features to run an application.

68. Can we run more than one process in a Docker container?


Yes, a Docker Container can provide process management that can be used to run multiple processes. There are process
supervisors like runit, s6, daemontools etc that can be used to fork additional processes in a Docker container.

69. What are the objects created by Docker Cloud in Amazon Web
Services (AWS) EC2?

Docker Cloud creates following objects in AWS EC2 instance:

I. VPC : Docker Cloud creates a Virtual Private Cloud with the tag name dc-vpc. It also creates a Classless
Inter-Domain Routing (CIDR) block with the range of 10.78.0.0/16 .

II. Subnet : Docker Cloud creates a subnet in each Availability Zone (AZ). In Docker Cloud, each subnet
is tagged with dc-subnet.

III. Internet Gateway : Docker Cloud also creates an internet gateway with name dc-gateway and attaches it
to the VPC created earlier.

IV. Routing Table : Docker Cloud also creates a routing table named dc-route-table in Virtual Private Cloud. In
this Routing Table Docker Cloud associates the subnet with the Internet Gateway.

70. How will you take backup of Docker container volumes in AWS S3?
We can use a utility named Dockup provided by Docker Cloud to take backup of Docker container volumes in S3.

71. What are the three main steps of Docker Compose?


Three main steps of Docker Compose are as follows:
I. Environment : We first define the environment of our application with a Dockerfile. It can be used to recreate
the environment at a later point of time.

II. Services : Then we define the services that make our app in docker-compose.yml. By using this file we
can define how these services can be run together in an environment.

III. Run : The last step is to run the Docker Container. We use docker-compose up to start and run the
application.
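
A minimal docker-compose.yml sketch for these steps, assuming a web application built from the local Dockerfile and a Redis dependency (service names and ports are examples):

version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis

% docker-compose up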

72. What is Pluggable Storage Driver architecture in Docker based containers?

Docker storage driver is by default based on a Linux file system. But Docker storage driver also has provision to plug in any
other storage driver that can be used for our environment.

In Pluggable Storage Driver architecture, we can use multiple kinds of file systems in our Docker Container. In Docker info
command we can see the Storage Driver that is set on a Docker daemon.

We can even plug in shared storage systems with the Pluggable Storage Driver architecture.

73. What are the main security concerns with Docker based containers?
Docker based containers have following security concerns:

I. Kernel Sharing : In a container-based system, multiple containers share same Kernel. If one container causes
Kernel to go down, it will take down all the containers. In a virtual machine environment we do not have this
issue.

II. Container Leakage : If a malicious user gains access to one container, it can try to access the other
containers on the same host. If a container has security vulnerabilities it can allow the user to access other
containers on same host machine.

III. Denial of Service : If one container occupies the resources of a Kernel then other containers will starve for
resources. It can create a Denial of Service attack like situation.

IV. Tampered Images : Sometimes a container image can be tampered with. This can lead to further security
concerns. An attacker can try to run a tampered image to exploit the vulnerabilities in host machines and
other containers.

V. Secret Sharing : Generally one container can access other services. To access a service it requires a Key or
Secret. A malicious user can gain access to this secret. Since multiple containers share the secret, it may lead
to further security concerns.

74. How can we check the status of a Container in Docker?

We can use the docker ps -a command to get the list of all the containers in Docker. This command also returns the status of these containers.

75. What are the main benefits of using Docker?


Docker is a very powerful tool. Some of the main benefits of using Docker are as follows:
I. Utilize Developer Skills : With Docker we maximize the use of Developer skills. With Docker there is less need of build or
release engineers. Same Developer can create software and wrap it in one single file.
II. Standard Application Image : Docker based system allows us to bundle the application software and Operating system files in a
single Application Image that can be deployed independently.
III. Uniform deployment : With Docker we can create one package of our software and deploy it on different platforms seamlessly.

76. How does Docker simplify Software Development process?

Prior to Docker, Developers would develop software and pass it to QA for testing and then it is sent to Build & Release team for deployment.
In Docker workflow, Developer builds an Image after developing and testing the software. This Image is shipped to Registry. From Registry it is
available for deployment to any system. The development process is simpler since steps for QA and Deployment etc take place before the Image
is built. So Developer gets the feedback early.

77. What is the basic architecture behind Docker?

Docker is built on client server model. Docker server is used to run the images. We use Docker client to communicate with Docker server.
Clients tell Docker server via commands what to do.
Additionally there is a Registry that stores Docker Images. Docker Server can directly contact Registry to download images.

78. What are the popular tasks that you can do with Docker Command
line tool?

Docker Command Line (DCL) tool is implemented in Go language. It can compile and run on most of the common operating systems. Some of
the tasks that we can do with Docker Command Line tool are as follows:
I. We can download images from Registry with DCL.
II. We can start, stop or terminate a container on a Docker server by DCL.
III. We can retrieve Docker Logs via DCL.
IV. We can build a Container Image with DCL.

79. What type of applications- Stateless or Stateful are more suitable for
Docker Container?

It is preferable to create Stateless application for Docker Container. We can create a container out of our application and take out the configurable
state parameters from application. Now we can run same container in Production as well as QA environments with different parameters. This helps
in reusing the same Image in different scenarios. Also a stateless application is much easier to scale with Docker Containers than a stateful
application.

80. How can Docker run on different Linux distributions?

Docker directly works with Linux kernel level libraries. Every Linux distribution runs on the same Linux kernel. Docker containers share the same kernel as the
host kernel.
Since all the distributions share the same Kernel, the container can run on any of these distributions.

81. Why do we use Docker on top of a virtual machine?

Generally we use Docker on top of a virtual machine to ensure isolation of the application. On a virtual machine we can get the advantage of
security provided by hypervisor. We can implement different security levels on a virtual machine. And Docker can make use of this to run the
application at different security levels.

82. How can Docker container share resources?

We can run multiple Docker containers on the same host. These containers share Kernel resources. Each container has its own
user-space and libraries, but they all run on the host Operating System's kernel.
So in a way Docker container does not share resources within its own namespace. But the resources that are not in isolated namespace are shared
between containers. These are the Kernel resources of host machine that have just one copy.
So in the back-end there is same set of resources that Docker Containers share.

83. What is the difference between Add and Copy command in a Dockerfile?

Both Add and Copy commands of Dockerfile can copy new files from a source location to a destination in Container’s file path.
They behave almost same.
The main difference between these two is that Add command can also read the files from a URL.
As per Docker documentation, Copy command is preferable. Since Copy only supports copying local files to a Container, it is preferred over Add
command.
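
For illustration, in a Dockerfile (the paths and the URL are hypothetical):

# COPY copies a local file or directory into the image
COPY requirements.txt /usr/src/app/
# ADD can additionally fetch a file from a remote URL
ADD https://example.com/app.tar.gz /tmp/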

84. What is Docker Entrypoint?

We use Docker Entrypoint to set the starting point for a command in a Docker Image.
We can use the entrypoint as a command for running an Image in the container.
E.g. We can define the following entrypoint in a Dockerfile and run it with the following command:
ENTRYPOINT ["mycmd"]
% docker run mycmd

85. What is ONBUILD command in Docker?

We use the ONBUILD command in Docker to register instructions that will execute later, when the current image is used as the base for another build.
It is used to build a hierarchy of images that have to be built on top of the parent image.
A Docker build of the child image will execute the ONBUILD instructions first and then execute the other commands in the child Dockerfile.

86. What is Build cache in Docker?


When we build an Image, Docker will process each line in Dockerfile. It will execute the commands on each line in the order that is mentioned in
the file.
But at each line, before running any command, Docker will check if there is already an existing image in its cache that can be reused rather than
creating a new image.
This method of using cache in Docker is called Build cache in Docker.
We can also specify the option --no-cache=true to let Docker know that we do not want to use cache for Images. With this option, Docker will
create all new images.
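
For example (the image name myapp is arbitrary):

% docker build -t myapp .                    # reuses cached layers where possible
% docker build --no-cache=true -t myapp .    # forces every layer to be rebuilt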

87. What are the most common instructions in Dockerfile?

Some of the common instructions in Dockerfile are as follows:


I. FROM : We use FROM to set the base image for subsequent instructions. In every valid Dockerfile, FROM is the first
instruction.
II. LABEL : We use LABEL to organize our images as per project, module, licensing etc. We can also use LABEL to help in
automation.
In LABEL we specify a key value pair that can be later used for programmatically handling the Dockerfile.
III. RUN : We use RUN command to execute any instructions in a new layer on top of the current image. With each RUN command
we add something on top of the image and use it in subsequent steps in Dockerfile.
IV. CMD : We use CMD command to provide default values of an executing container. In a Dockerfile, if we include multiple CMD
commands, then only the last instruction is used.
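
A minimal Dockerfile sketch using these instructions (the base image and packages are just examples):

FROM ubuntu:20.04
LABEL project="demo" maintainer="devops-team"
RUN apt-get update && apt-get install -y python3
CMD ["python3", "--version"]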

88. What is the purpose of EXPOSE command in Dockerfile?

We use EXPOSE command to inform Docker that Container will listen on a specific network port during runtime.
But these ports on Container may not be accessible to the host. We can use -p to publish a range of ports from Container.
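
For example, with EXPOSE 8080 in the Dockerfile of a hypothetical image myapp:

% docker run -d -p 8080:8080 myapp     # publishes the exposed port to the same host port
% docker run -d -P myapp               # publishes all exposed ports to random host ports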

89. What are the different kinds of namespaces available in a Container?

In a Container we have an isolated environment with namespace for each resource that a kernel provides. There are mainly six types of
namespaces in a Container.
I. UTS Namespace : UTS stands for Unix Timesharing System. In UTS namespace every container gets its own hostname and
domain name.
II. Mount Namespace : This namespace provides its own file system within a container. With this namespace we get root like / in the
file system on which rest of the file structure is based.
III. PID Namespace : This namespace contains all the processes that run within a Container. We can run ps command to see the
processes that are running within a Docker container.
IV. IPC Namespace : IPC stands for Inter Process Communication. This namespace covers shared memory, semaphores, named
pipes etc resources that are shared by processes. The items in this namespace do not cross the container boundary.
V. User Namespace : This namespace contains the users and groups that are defined within a container.
VI. Network Namespace : With this namespace, container provides its own network resources like- ports, devices etc. With this
namespace, Docker creates an independent network stack within each container.
90. How will you monitor Docker in production?

Docker provides tools like docker stats and docker events to monitor Docker in production.
We can get reports on important statistics with these commands.
Docker stats : When we call docker stats with a container id, we get the CPU, memory usage etc of a container. It is similar to top command in
Linux.
Docker events : docker events is a command to see the stream of activities that are going on in the Docker daemon.
Some of the common Docker events are: attach, commit, die, detach, rename, destroy etc.
We can also use various options to limit or filter the events that we are interested in.
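
For example (the container name mycontainer is arbitrary):

% docker stats mycontainer
% docker events --since 1h --filter event=die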

91. What are the Cloud platforms that support Docker?

Some of the popular cloud platforms that support Docker are:


I. Amazon AWS
II. Google Cloud Platform
III. Microsoft Azure
IV. IBM Bluemix

92. How can we control the startup order of services in Docker compose?

In Docker compose we can use the depends_on option to control the startup order of services.
With compose, the services will start in the dependency order. Dependencies can be defined in the options like- depends_on, links, volumes_from,
network_mode etc.
But Docker Compose does not wait until a container is "ready"; it only waits until the container is running.
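
A small compose sketch (service names are examples):

version: "3"
services:
  web:
    build: .
    depends_on:
      - db
  db:
    image: postgres

Here db is started before web when we run docker-compose up, but Compose does not wait for Postgres itself to be ready.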

93. Why Docker compose does not wait for a container to be ready before
moving on to start next service in dependency order?

The problem with waiting for a container to be ready is that in a Distributed system, some services or hosts may become unavailable at times.
Similarly, during startup some services may be down.
Therefore, we have to build resiliency in our application. So that even if some services are down we can continue our work or wait for the service
to become available again.
We can use wait-for-it or dockerize tools for building this kind of resiliency.
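
As a rough sketch, assuming the wait-for-it.sh script has been copied into the web image (service names, the database port and the Python entry point are hypothetical), the service can wait for its database port before starting:

services:
  web:
    build: .
    depends_on:
      - db
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres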

94. How will you customize Docker compose file for different
environments?

In Docker compose there are two files docker-compose.yml and docker-compose.override.yml. We specify our base configuration in docker-
compose.yml file. For any environment specific customization we use docker-compose.override.yml file.
We can specify a service in both the files. Docker compose will merge these files based on following rules:
For single value options, new value replaces the old value.
For multi-value options, compose will concatenate both sets of values.
We can also use extends field to extend a service configuration to multiple environments. With extends, child services can use the common
configuration defined by parent service.
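
For example, the override file is merged automatically, and additional files can be listed explicitly (docker-compose.prod.yml is a hypothetical name):

% docker-compose up -d
% docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d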

Cloud Computing Questions

95. What are the benefits of Cloud Computing?

There are ten main benefits of Cloud Computing:

I. Flexibility : The businesses that have fluctuating bandwidth demands need the flexibility of Cloud Computing. If you need high
bandwidth, you can scale up your cloud capacity. When you do not need high bandwidth, you can just scale down. There is no
need to be tied into an inflexible fixed capacity infrastructure.
II. Disaster Recovery : Cloud Computing provides robust backup and recovery solutions that are hosted in cloud. Due to this there
is no need to spend extra resources on homegrown disaster recovery. It also saves time in setting up disaster recovery.
III. Automatic Software Updates : Most of the Cloud providers give automatic software updates. This reduces the extra task of
installing new software version and always catching up with the latest software installs.
IV. Low Capital Expenditure : In Cloud computing the model is Pay as you Go. This means there is very less upfront capital
expenditure. There is a variable payment that is based on the usage.
V. Collaboration: In a cloud environment, applications can be shared between teams. This increases collaboration and
communication among team members.
VI. Remote Work: Cloud solutions provide flexibility of working remotely. There is no on site work. One can just connect from
anywhere and start working.
VII. Security: Cloud computing solutions are more secure than regular onsite work. Data stored in local servers and computers is
prone to security attacks. In Cloud Computing, there are very few loose ends. Cloud providers give a secure working environment
to its users.
VIII. Document Control: Once the documents are stored in a common repository, it increases the visibility and transparency among
companies and their clients. Since there is one shared copy, there are fewer chances of discrepancies.
IX. Competitive Pricing: In Cloud computing there are multiple players, so they keep competing among themselves and provide very
good pricing. This comes out much cheaper compared to other options.
X. Environment Friendly: Cloud computing also saves precious environmental resources by not blocking unused resources and
bandwidth.

96. What is On-demand computing in Cloud Computing?

On-demand Computing is the latest model in enterprise systems. It is related to Cloud computing. It means IT resources can be provided on
demand by a Cloud provider.

In an enterprise system demand for computing resources varies from time to time. In such a scenario, On-demand computing makes sure that
servers and IT resources are provisioned to handle the increase/decrease in demand.

A cloud provider maintains a pool of resources. The pool of resources contains networks, servers, storage, applications and services. This pool can
serve the varying demand of resources and computing by various enterprise clients.

There are many concepts like- grid computing, utility computing, autonomic computing etc. that are similar to on-demand computing.

This is the most popular trend in computing model as of now.

97. What are the different layers of Cloud computing?

Three main layers of Cloud computing are as follows:

I. Infrastructure as a Service (IAAS): IAAS providers give low-level abstractions of physical devices. Amazon Web Services
(AWS) is an example of IAAS. AWS provides EC2 for computing, S3 buckets for storage etc. Mainly the resources in this layer
are hardware like memory, processor speed, network bandwidth etc.
II. Platform as a Service (PAAS): PAAS providers offer managed services like Rails, Django etc. One good example of PAAS is
Google App Engine. These are the environments in which developers can develop sophisticated software with ease.
Developers just focus on developing software, whereas scaling and performance are handled by the PAAS provider.
III. Software as a Service (SAAS) : SAAS providers offer an actual working software application to clients. Salesforce and Github
are two good examples of SAAS. They hide the underlying details of the software and just provide an interface to work on the
system. Behind the scenes the version of Software can be easily changed.

98. What resources are provided by Infrastructure as a Service (IAAS) provider?

An IAAS provider can give physical, virtual or both kinds of resources. These resources are used to build cloud.

IAAS provider handles the complexity of maintaining and deploying these services.

IAAS provider also handles security and backup recovery for these services. The main resources in IAAS are servers, storage, routers, switches
and other related hardware etc.

99. What is the benefit of Platform as a Service?

Platform as a service (PaaS) is a kind of cloud computing service. A PaaS provider offers a platform on which clients can develop, run and
manage applications without the need of building the infrastructure.

In PAAS clients save time by not creating and managing infrastructure environment associated with the app that they want to develop.

100. What are the main advantages of PaaS?


The advantages of PaaS are:

I. It allows development work on higher level programming with very less complexity.
II. Teams can focus on just the development of the application that makes the application very effective.
III. Maintenance and enhancement of the application is much easier.
IV. It is suitable for situations in which multiple developers work on a single project but are not co-located.

101. What is the main disadvantage of PaaS?

Biggest disadvantage of PaaS is that a developer can only use the tools that PaaS provider makes available. A developer cannot use the full range
of conventional tools.

Some PaaS providers lock in the clients in their platform. This also decreases the flexibility of clients using PaaS.

102. What are the different deployment models in Cloud computing?

Cloud computing supports following deployment models:

I. Private Cloud: Some companies build their private cloud. A private cloud is a fully functional platform that is owned, operated
and used by only one organization.
Primary reason for private cloud is security. Many companies feel secure in private cloud. The other reasons for building
private cloud are strategic decisions or control of operations.
There is also a concept of Virtual Private Cloud (VPC). In VPC, private cloud is built and operated by a hosting company.
But it is exclusively used by one organization.

II. Public Cloud: There are cloud platforms by some companies that are open for general public as well as big companies for use and
deployment. E.g. Google Apps, Amazon Web Services etc.

The public cloud providers focus on layers and application like- cloud application, infrastructure management etc. In this model
resources are shared among different organizations.

III. Hybrid Cloud: The combination of public and private cloud is known as Hybrid cloud. This approach provides benefits of both
the approaches- private and public cloud. So it is very robust platform.
A client gets functionalities and features of both the cloud platforms. By using Hybrid cloud an organization can create its own
cloud as well as they can pass the control of their cloud to another third party.

103. What is the difference between Scalability and Elasticity?

Scalability is the ability of a system to handle the increased load on its current hardware and software resources. In a highly scalable system it is
possible to increase the workload without increasing the resource capacity. Scalability supports any sudden surge in the demand/traffic with
current set of resources.

Elasticity is the ability of a system to increase the workload by increasing the hardware/software resources dynamically. Highly elastic systems can
handle the increased demand and traffic by dynamically commissioning and decommissioning resources. Elasticity is an important characteristic of Cloud
Computing applications. Elasticity means how well your architecture can adapt to the workload in real time.

E.g. In a system, one server can handle 100 users, 2 servers can handle 200 users and 10 servers can handle 1000 users. But if, for adding
every X users, you need 2X the number of servers, then it is not a scalable design.

Let say, you have just one user login every hour on your site. Your one server can handle this load. But, if suddenly, 1000 users login at once, can
your system quickly start new web servers on the fly to handle this load? Your design is elastic if it can handle such sudden increase in traffic so
quickly.

104. What is Software as a Service?

Software as Service is a category of cloud computing in which Software is centrally hosted and it is licensed on a subscription basis. It is also
known as On-demand software. Generally, clients access the software by using a thin-client like a web browser.

Many applications like Google docs, Microsoft office etc. provide SaaS model for their software.

The benefit of SaaS is that a client can add more users on the fly based on its current needs. And client does not need to install or maintain any
software on its premises to use this software.

105. What are the different types of Datacenters in Cloud computing?

Cloud computing consists of different types of Datacenters linked in a grid structure. The main types of Datacenters in Cloud computing are:

I. Containerized Datacenter

As the name suggests, containerized datacenter provides high level of customization for an organization. These are traditional kind of
datacenters. We can choose the different types of servers, memory, network and other infrastructure resources in this datacenter. Also
we have to plan temperature control, network management and power management in this kind of datacenter.

II. Low-Density Datacenters

In a Low-density datacenter, we get a high level of performance. In such a datacenter, if we increase the density of servers, the issue of
power arises. With a high density of servers, the area gets heated. In such a scenario, effective heat and power management is done. To
reach a high level of performance, we have to optimize the number of servers in the datacenter.
106. Explain the various modes of Software as a Service (SaaS) cloud
environment?

Software as a Service (SaaS) is used to offer different kinds of software applications in a Cloud environment. Generally these are offered on
subscription basis. Different modes of SaaS are:

I. Simple multi-tenancy : In this setup, each client gets its own resources. These resources are not shared with other clients. It is
the more secure option, since there is no sharing of resources. But it is an inefficient option, since for each client more money is needed to
scale it with rising demand. Also it takes time to scale up the application in this mode.

II. Fine grain multi-tenancy : In this mode, the feature provided to each client is same. The resources are shared among multiple
clients. It is an efficient mode of cloud service, in which data is kept private among different clients but computing resources are
shared. Also it is easier and quicker to scale up the SaaS implementation for different clients.

107. What are the important things to care about in Security in a cloud
environment?

In a cloud-computing environment, security is one of the most important aspects.

With growing concern of hacking, every organization wants to make its software system and data secure. Since in a cloud computing environment,
Software and hardware is not on the premises of an organization, it becomes more important to implement the best security practices.

Organizations have to keep their data most secure during the transfer between two locations. Also they have to keep data secure when it is stored
at a location. Hackers can hack into an application or they can get an unauthorized copy of the data. So it becomes important to encrypt the data
in transit as well as at rest to protect it from unwanted hackers.

108. Why do we use API in cloud computing environment?

Application Programming Interfaces (APIs) are used in a cloud computing environment for accessing many services. APIs are very easy to use. They
provide a quick option to create different sets of applications in a cloud environment.
An API provides a simple interface that can be used in multiple scenarios.

There are different types of clients for cloud computing APIs. It is easier to serve different needs of multiple clients with APIs in cloud computing
environment.

109. What are the different areas of Security Management in cloud?

Different areas of Security management in cloud are as follows:


I. Identity Management : This aspect creates different level of users, roles and their credentials to access the services in cloud.
II. Access Control : In this area, we create multiple levels of permissions and access areas that can be given to a user or role for
accessing a service in cloud environment.

III. Authentication : In this area, we check the credentials of a user and confirm that it is the correct user. Generally this is done by
user password and multi-factor authentication like-verification by a one-time use code on cell phone.

IV. Authorization : In this aspect, we check for the permissions that are given to a user or role. If a user is authorized to access a
service, they are allowed to use it in the cloud environment.

110. What are the main cost factors of cloud based data center?

Costs in a Cloud based data center are different from a traditional data center. Main cost factors of cloud based data center are as follows:

I. Labor cost : We need skilled staff that can work with the cloud-based datacenter that we have selected for our operation. Since
cloud is not a very old technology, it may be difficult to find people with the right skills for handling a cloud based datacenter.
II. Power cost : In some cloud operations, power costs are borne by the client. Since it is a variable cost, it can increase with the
increase in scale and usage.

III. Computing cost : The biggest cost in Cloud environment is the cost that we pay to Cloud provider for giving us computing
resources. This cost is much higher compared to the labor or power costs.

111. How can we measure the cloud-based services?

In a cloud-computing environment we pay for the services that we use. So the main criterion for measuring a cloud based service is its usage.

For a computing resource we measure the usage in terms of time and the power of the computing resource.

For a storage resource we measure the usage in terms of bytes (gigabytes) and bandwidth used in data transfer.

Another important aspect of measuring a cloud service is its availability. A cloud provider has to specify the service level agreement (SLA) for the
time for which service will be available in cloud.

112. How a traditional datacenter is different from a cloud environment?

In a traditional datacenter the cost of increasing the scale of computing environment is much higher than a Cloud computing environment. Also in a
traditional data center, there are not much benefits of scaling down the operation when demand decreases. Since most of the expenditure is in
capital spent of buying servers etc., scaling down just saves power cost, which is very less compared to other fixed costs.

Also in a Cloud environment there is no need to hire a large number of operations staff to maintain the datacenter. Cloud provider takes care of
maintaining and upgrading the resources in Cloud environment.
With a traditional datacenter, people cost is very high since we have to hire a large number of technical operation people for in-house datacenter.

113. How will you optimize availability of your application in a Cloud environment?

In a Cloud environment, it is important to optimize the availability of an application by implementing disaster recovery strategy. For disaster
recovery we create a backup application in another location of cloud environment. In case of complete failure at a data center we use the disaster
recovery site to run the application.

Another aspect of cloud environment is that servers often fail or go down. In such a scenario it is important to implement the application in such a
way that we just kill the slow server and restart another server to handle the traffic seamlessly.

114. What are the requirements for implementing IaaS strategy in Cloud?

Main requirements to implement IAAS are as follows:

I. Operating System (OS): We need an OS to support hypervisor in IaaS. We can use open source OS like Linux for this
purpose.

II. Networking : We have to define and implement networking topology for IaaS implementation. We can use public or private
network for this.

III. Cloud Model : We have to select the right cloud model for implementing IaaS strategy. It can be SaaS, PaaS or CaaS.

115. What is the scenario in which public cloud is preferred over private
cloud?

In a startup mode often we want to test our idea. In such a scenario it makes sense to setup application in public cloud. It is much faster and
cheaper to use public cloud over private cloud.

Remember security is a major concern in public cloud. But with time and changes in technology, even public cloud is very secure.

116. Do you think Cloud Computing is a software application or a hardware service?

Cloud Computing is neither a software application nor a hardware service. Cloud computing is a system architecture that can be used to implement
software as well as hardware strategy of an organization.

Cloud Computing is a highly scalable, highly available and cost effective solution for software and hardware needs of an application.

Cloud Computing provides great ease of use in running the software in cloud environment. It is also very fast to implement compared with any
other traditional strategy.

117. Why companies now prefer Cloud Computing architecture over Client Server Architecture?

In Client Server architecture there is one-to-one communication between client and server. The server is often in an in-house datacenter and clients can
access the same server from anywhere. If a client is at a remote location, the communication can have high latency.

In Cloud Computing there can be multiple servers in the cloud. There will be a Cloud controller that directs the requests to right server node. In
such a scenario clients can access cloud-based service from any location and they can be directed to the one nearest to them.

Another reason for Cloud computing architecture is high availability. Since there are multiple servers behind the cloud, even if one server is down,
another server can serve the clients seamlessly.

118. What are the main characteristics of Cloud Computing architecture?

Main characteristics of Cloud Computing architecture are as follows:

I. Elasticity : A Cloud Computing system is highly elastic in the sense that it can easily adapt itself to an increase or decrease in load.
There is no need to take urgent action when there is a surge in traffic requests.

II. Self-service provisioning : In Cloud environment users can provision new resources on their own by just calling some APIs.
There is no need to fill forms and order actual hardware from vendors.

III. Automated de-provisioning : In case demand/load decreases, extra resources can be automatically shut down in Cloud
computing environment.

IV. Standard Interface : There are standard interfaces to start, stop, suspend or remove an instance in Cloud environment. Most of
the services are accessible via public and standard APIs in Cloud computing.

V. Usage based Billing : In a Cloud environment, users are charged for their usage of resources. They can forecast their bill and
costs based on the growth they are expecting in their load.
119. How databases in Cloud computing are different from traditional
databases?

In a Cloud environment, companies often use different kind of data to store. There are data like email, images, video, pdf, graph etc. in a Cloud
environment. To store this data often NoSQL databases are used.

A NoSQL database like MongoDB provides storage and retrieval of data that cannot be stored efficiently in a traditional RDBMS.

A database like Neo4J provides features to store graph data, such as the social graphs of Facebook, LinkedIn etc., in a cloud environment.

Hadoop-based databases help in storing Big Data. They can handle the very large-scale information that is generated in a large-scale
environment.

120. What is Virtual Private Network (VPN)?

In a Cloud environment, we can create a virtual private network (VPN) that can be used solely by one client. This is a secure network in
which data transfer between servers of the same VPN is very secure.
By using a VPN, an organization uses the public network in a private manner. It increases the privacy of an organization's data transfer in a cloud
environment.

121. What are the main components of a VPN?

Virtual Private Network (VPN) consists of following main components:

I. Network Access Server (NAS): A NAS server is responsible for setting up the tunnels in a VPN that is accessed remotely. It
maintains these tunnels that connect clients to the VPN.

II. Firewall : It is the software that creates barrier between VPN and public network. It protects the VPN from malicious activity that
can be done from the outside network.

III. AAA Server : This is an authentication and authorization server that controls the access and usage of VPN. For each request to
use VPN, AAA server checks the user for correct permissions.

IV. Encryption : In a VPN, encryption algorithms protect the important private data from malicious users.

122. How will you secure the application data for transport in a cloud
environment?

With ease of use in Cloud environment comes the important aspect of keeping data secure. Many organizations have data that is transferred from
their traditional datacenter to Cloud datacenter.
During the transit of data it is important to keep it secure. One of the best ways to secure data is by using the HTTPS protocol over Secure Socket
Layer (SSL).

Another important point is to keep the data always encrypted. This protects data from being accessed by any unauthorized user during transit.

123. What are the large-scale databases available in Cloud?

In Cloud computing scale is not a limit. So there are very large-scale databases available from cloud providers. Some of these are:

I. Amazon DynamoDB : Amazon Web Services (AWS) provides a NoSQL web service called DynamoDB that provides highly
available and partition tolerant database system. It has a multi-master design. It uses synchronous replication across multiple
datacenters. We can easily integrate it with MapReduce and Elastic MapReduce of AWS.

II. Google Bigtable : This is a very large-scale, high performance cloud based database option from Google. It is available on Google
Cloud. It can be scaled to petabytes. It is a Google proprietary implementation. In Bigtable, two arbitrary string values, row key
and column key, and timestamp are mapped to an arbitrary byte array. In Bigtable MapReduce algorithm is used for modifying and
generating the data.

III. Microsoft Azure SQL Database : Microsoft Azure provides cloud based SQL database that can be scaled very easily for
increased demand. It has very good security features and it can be even used to build multi-tenant apps to service multiple
customers in cloud.

124. What are the options for open source NoSQL database in a Cloud
environment?

Most of the cloud-computing providers support Open Source NoSQL databases. Some of these databases are:
I. Apache CouchDB : It is a document based NoSQL database from Apache Open Source. It is compatible with Couch
Replication Protocol. It can communicate in native JSON and can store binary data very well.

II. HBase : It is a NoSQL database for use with Hadoop based software. It is also available as Open Source from Apache. It is a
scalable and distributed Big Data database.

III. MongoDB : It is an open source database system that offers a flexible data model that can be used to store various kinds of data.
It provides high performance and always-on user experience.

125. What are the important points to consider before selecting cloud
computing?

Cloud computing is a very good option for an organization to scale and outsource its software/hardware needs. But before selecting a cloud
provider it is important to consider following points:
I. Security : One of the most important points is security of the data. We should ask the cloud provider about the options to keep
data secure in cloud during transit and at rest.

II. Data Integrity : Another important point is to maintain the integrity of data in cloud. It is essential to keep data accurate and
complete in cloud environment.

III. Data Loss : In a cloud environment, there are chances of data loss. So we should know the provisions to minimize the data loss. It
can be done by keeping backup of data in cloud. Also there should be reliable data recovery options in case of data loss.

IV. Compliance : While using a cloud environment, one must be aware of the rules and regulations that have to be followed to use the cloud. There are compliance issues with storing a user's data in an external provider's location/servers.

V. Business Continuity : In case of any disaster, it is important to create business continuity plans so that we can provide
uninterrupted service to our end users.

VI. Availability : Another important point is the availability of data and services in a cloud-computing environment. It is very important
to provide high availability for a good customer experience.

VII. Storage Cost : Since data is stored in cloud, it may be very cheap to store the data. But the real cost can come in transfer of data
when we have to pay by bandwidth usage. So storage cost of data in cloud should also include the access cost of data transfer.

VIII. Computing Cost : One of the highest costs in the cloud is the computing cost, and it can grow very quickly as the scale increases. So cloud computing options should be chosen wisely, keeping in mind the computing cost charged for them.

126. What is a System integrator in Cloud computing?

Often an organization does not know all the options available in a Cloud computing environment. Here comes the role of a System Integrator (SI)
who specializes in implementing Cloud computing environment.

SI creates the strategy of cloud setup. It designs the cloud platform for the use of its client. It creates the cloud architecture for the business need of
client.

SI oversees the overall implementation of cloud strategy and plan. It also guides the client while choosing the right options in cloud computing
platform.

127. What is virtualization in cloud computing?

Virtualization is the core of cloud computing platform. In cloud we can create a virtual version of hardware, storage and operating system that can
be used to deploy the application.

A cloud provider gives options to create virtual machines in cloud that can be used by its clients. These virtual machines are much cheaper than
buying a few high end computing machines.

In the cloud we can use multiple cheap virtual machines to implement a resilient software system that can be scaled up very quickly, whereas buying actual high-end machines to scale the system is very costly and time-consuming.

128. What is Eucalyptus in a cloud environment?

Eucalyptus is open source software for building private and hybrid clouds that are compatible with Amazon Web Services (AWS) APIs.

It stands for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems.

We can create our own datacenter in a private cloud by using Eucalyptus. It makes use of pooling the computing and storage resources to scale up
the operations.

In Eucalyptus, we create images of software applications. These images are deployed to create instances. These instances are used for computing
needs.

A Eucalyptus instance can have both public and private ip addresses.

129. What are the main components of Eucalyptus cloud architecture?

The main components of Eucalyptus cloud architecture are as follows:

I. Cloud Controller (CLC) : This is the controller that manages virtual resources like servers, network and storage. It is at the
highest level in hierarchy. It is a Java program with web interface for outside world. It can do resource scheduling as well as
system accounting. There is only one CLC per cloud. It can handle authentication, accounting, reporting and quota management in
cloud.

II. Walrus : This is another Java program in Eucalyptus that is equivalent to AWS S3 storage. It provides persistent storage. It also
contains images, volumes and snapshots similar to AWS. There is only one Walrus in a cloud.

III. Cluster Controller (CC) : It is a C program that is the front end for a Eucalyptus cloud cluster. It can communicate with Storage
controller and Node controller. It manages the instance execution in cloud.

IV. Storage Controller (SC) : It is a Java program equivalent to EBS in AWS. It can interface with Cluster Controller and Node
Controller to manage persistent data via Walrus.

V. Node Controller (NC) : It is a C program that can host a virtual machine instance. It is at the lowest level in Eucalyptus cloud. It
downloads images from Walrus and creates an instance for computing requirements in cloud.

VI. VMWare Broker : It is an optional component in Eucalyptus. It provides AWS compatible interface to VMWare environment.

130. What is Auto-scaling in Cloud computing?

Amazon Web Services (AWS) provides an important feature called Auto-scaling in the cloud. With Auto-scaling setup we can automatically
provision and start new instances in AWS cloud without any human intervention.

Auto-scaling is triggered based on load and other metrics.

Let's say the load reaches a threshold: we can set up auto-scaling to kick in and start a new server to handle the additional load.
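As an illustration, a minimal sketch of setting this up with the AWS CLI is shown below (the group name, launch configuration name and Availability Zone are placeholder values, not part of the original answer):

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-launch-config \
    --min-size 1 --max-size 5 --desired-capacity 2 \
    --availability-zones us-east-1a

A scaling policy (for example, target tracking on average CPU utilization) can then be attached to the group so that instances are added or removed automatically as the load crosses the threshold.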

131. What are the benefits of Utility Computing model?

Utility computing is a cloud service model in which the provider gives computing resources to users on an as-needed basis.

Some of the main benefits of Utility computing are:

I. Pay per use : Since a user pays only for usage, the cost model of Utility computing is pay per use. We pay only for the number of servers or instances that we use in the cloud.

II. Easy to Scale : It is easier to scale up the operations in Utility computing. There is no need to plan for time consuming and costly
hardware purchase.

III. Maintenance : In Utility computing, maintenance of servers is done by the cloud provider, so a user can focus on its core business and need not spend time and resources on maintaining servers in the cloud.

Utility computing is also known as On-demand computing.

132. What is a Hypervisor in Cloud Computing?

Hypervisor is also known as virtual machine monitor (VMM). It is a computer software/hardware that can create and run virtual machines.

Hypervisor runs on a host machine. Each virtual machine is called Guest machine.

Hypervisor derives its name from term supervisor, which is a traditional name for the kernel of an operating system.

Hypervisor provides a virtual operating platform to the guest operating system. It manages the execution of guest OS.

133. What are the different types of Hypervisor in Cloud Computing?

Hypervisors come in two main types:


I. Type-1, native or bare-metal hypervisors : Type 1 hypervisor runs directly on the hardware of host machine. It controls the
guest operating system from host machine. It is also called bare metal hypervisor or native hypervisor.

Examples of Type-1 are: Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, the Citrix XenServer, Microsoft
Hyper-V and VMware ESX/ESXi.

II. Type-2, hosted hypervisors: Type 2 hypervisor runs like a regular computer program on an operating system. The guest
operating system runs like a process on the host machine. It creates an abstract guest operating system different from the host
operating system.

Examples of Type-2 hypervisors are: VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU.

134. Why does a Type-1 Hypervisor have better performance than a Type-2 Hypervisor?

Type-1 Hypervisor has better performance than Type-2 hypervisor because Type-1 hypervisor skips the host operating system and it runs directly
on host hardware. So it can utilize all the resources of host machine.

In cloud computing Type-1 hypervisors are more popular since Cloud servers may need to run multiple operating system images.

135. What is CaaS?

CaaS is also known as Communication as a Service. It is available in Telecom domain. One of the examples for CaaS is Voice Over IP (VoIP).

CaaS offers business features like desktop call control, unified messaging, and fax via desktop.

CaaS also provides services for Call Center automation like- IVR, ACD, call recording, multimedia routing and screen sharing.

136. How is Cloud computing different from computing for mobile devices?

Since Mobile devices are getting connected to the Internet in large numbers, we often use Cloud computing for Mobile devices.

In mobile applications, there can be a sudden increase in traffic as well as usage. Some applications even go viral very quickly, which leads to a very high load on the application.
In such a scenario, it makes sense to use Cloud Computing for mobile devices.

Also, since mobile devices keep changing over time, the standard interfaces of cloud computing help in handling a wide variety of mobile devices.

137. Why is automation of deployment very important in Cloud architecture?

One of the main reasons for selecting Cloud architecture is scalability of the system. In case of heavy load, we have to scale up the system so that
there is no performance degradation.

While scaling up the system we have to start new instances. To provision new instances we have to deploy our application on them.

In such a scenario, if we want to save time, it makes sense to automate the deployment process. When new instances are provisioned and deployed automatically in response to load, this is known as Auto-scaling.

With a fully automated deployment process we can start new instances based on automated triggers that are raised by load reaching a threshold.

138. What are the main components in Amazon Cloud?

Amazon provides a wide range of products in Amazon Web Services for implementing Cloud computing architecture. In AWS some of the main
components are as follows:

I. Amazon EC2 : This is used for creating instances and getting computing power to run applications in AWS.
II. Amazon S3 : This is a Simple Storage Service from AWS to store files and media in cloud.

III. Amazon DynamoDB : It is the database solution by AWS in cloud. It can store very large-scale data to meet needs of even
BigData computing.

IV. Amazon Route53 : This is a cloud based Domain Name System (DNS) service from AWS.

V. Amazon Elastic Load Balancing (ELB): This component can be used to load balance the various nodes in AWS cloud.

VI. Amazon CodeDeploy : This service provides feature to automate the code deployment to any instance in AWS.

139. What are main components in Google Cloud?

Google Cloud is a newer alternative to Amazon's cloud, but it provides many additional features compared to AWS. Some of the main components of Google Cloud are as follows:
I. Compute Engine : This component provides computing power to Google Cloud users.

II. Cloud Storage : As the name suggests this is a cloud storage solution from Google for storing large files for application use or just
serving over the Internet.

III. Cloud Bigtable : It is Google's proprietary database offered in the Cloud. Users can now use this unique database for building their own applications.

IV. Cloud Load Balancing : This is a cloud-based load balancing service from Google.

V. BigQuery : It is a data-warehouse solution from Google in Cloud to perform data analytics of large scale.

VI. Cloud Machine Learning Platform : It is a powerful cloud based machine learning product from Google to perform machine
learning with APIs like- Job Search, Text Analysis, Speech Recognition, Dynamic translation etc.

VII. Cloud IAM : This is an Identity and Access management tool from Google to help administrators run the security and
authorization/authentication policies of an organization.

140. What are the major offerings of Microsoft Azure Cloud?

Microsoft is a relatively new entrant to Cloud computing with Azure cloud offering. Some of the main products of Microsoft cloud are as follows:

I. Azure Container Service : This is a cloud computing service from Microsoft to run and manage Docker based containers.

II. StorSimple : It is a Storage solution from Microsoft for Azure cloud.

III. App Service : By using App Services, users can create Apps for mobile devices as well as websites.

IV. SQL Database : It is a Cloud based SQL database from Microsoft.

V. DocumentDB : This is a NoSQL database in cloud by Microsoft.

VI. Azure Bot Service : We can use Azure Bot Service to create serverless bots that can be scaled up on demand.

VII. Azure IoT Hub : It is a solution for Internet of Things services in cloud by Microsoft.

141. What are the reasons for the popularity of Cloud Computing architecture?

These days, Cloud Computing is one of the most popular architectures among organizations for their systems. Following are some of the reasons for the popularity of Cloud Computing architecture:

I. IoT : With the Internet of Things, there are many types of machines joining the Internet and creating various types of interactions. In
such a scenario, Cloud Computing serves well to provide scalable interfaces to communicate between the machines in IoT.
II. Big Data : Another major trend in today’s computing is Big Data. With Big Data there is very large amount of user / machine data
that is generated. Using in-house solution to handle Big Data is very costly and capital intensive. In Cloud Computing we can
handle Big Data very easily since we do not have to worry about capital costs.

III. Mobile Devices : A large number of users are going to Mobile computing. With a mobile device users can access a service from
any location. To handle wide-variety of mobile devices, standard interfaces of Cloud Computing are very useful.

IV. Viral Content : With the growth of Social Media, content and media frequently go viral, i.e. traffic on a server can grow exponentially in a very short time. In such a scenario, the Auto-scaling of a Cloud Computing architecture can handle such spikes very easily.

142. What are the Machine Learning options from Google Cloud?

Google provides a very rich library of Machine Learning options in Google Cloud. Some of these APIs are:

I. Google Cloud ML : This is a general purpose Machine Learning API in cloud. We can use pre-trained models or generate new
models for machine learning with this option.

II. Google Cloud Jobs API : It is an API to link Job Seekers with Opportunities. It is mainly for job search based on skills, demand
and location.

III. Google Natural Language API : This API can do text analysis of natural language content. We can use it for analyzing the
content of blogs, websites, books etc.

IV. Google Cloud Speech API : It is a Speech Recognition API from Google to handle spoken text. It can recognize more than 80
languages and their related variants. It can even transcribe the user speech into written text.

V. Google Cloud Translate API : This API can translate content from one language to another language in cloud.

VI. Google Cloud Vision API : It is a powerful API for Image analysis. It can recognize faces and objects in an image. It can even
categorize images in multiple relevant categories with a simple REST API call.

143. How will you optimize the Cloud Computing environment?

In a Cloud Computing environment we pay by usage, so the more we use, the more we pay. To optimize the Cloud Computing environment we have to keep a balance between our costs and our actual usage.

If we are paying for computing instances, we can choose options like Lambda in AWS, which can be a much cheaper option for computing in the cloud.

In the case of Storage, if the data to be stored is not going to be accessed frequently, we can go for the Glacier option in AWS.

Similarly when we pay for bandwidth usage, it makes sense to implement a caching strategy so that we use less bandwidth for the content that is
accessed very frequently.
It is a challenging task for an architect in cloud to match the options available in cloud with the budget that an organization has to run its
applications.

Optimizations like server-less computing, load balancing, and storage selection can help in keeping the Cloud computing costs low with no
degradation in User experience.

144. Do you think Regulations and Legal Compliance are an important aspect of Cloud Computing?

Yes, in Cloud Computing we are using resources that are owned by the Cloud provider. Due to this our data resides on the servers that can be
shared by other users of Cloud.

There are regulations and laws for handling user data. We have to ensure that these regulations are met while selecting and implementing a Cloud
computing strategy.

Similarly, if we are in a contract with a client to provide a certain Service Level Agreement (SLA) performance, we have to implement the cloud solution in such a way that there is no breach of the SLA due to the Cloud provider's failures.

For security there are laws that have to be followed irrespective of Cloud or Co-located Data center. This is in the interest of our end-customer as
well as for the benefit of business continuity.

With Cloud computing architecture we have to do due diligence in selecting Security and Encryption options in Cloud.

Unix Questions

145. How will you remove all files in the current directory, including the files that are two levels down in a sub-directory?

In Unix we have the rm command to remove files and sub-directories. The rm command has a -r option that stands for recursive. The -r option can delete all files in a directory recursively.

It means that if our current directory structure is as follows:

My_dir
->Level_1_dir
-> Level_1_dir ->Level_2_dir
-> Level_1_dir ->Level_2_dir->a.txt
With the rm -r * command we can delete the file a.txt as well as the sub-directories Level_1_dir and Level_2_dir.

Command:
rm -r *

The asterisk (*) is a wild card character that stands for all the files with any name.
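Before running such a destructive command, it can help to preview what will be removed (a sketch; this step is not part of the original answer):

% find . -print

This lists everything under the current directory, including hidden entries; note that rm -r * does not remove entries whose names begin with a dot.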

146. What is the difference between the -v and -x options in Bash shell scripts?

In a BASH Unix shell we can specify the options -v and -x at the top of a script as follows:

#!/bin/bash -x -v

With the -x option, the BASH shell will echo commands like for, select, case etc. after substituting the arguments and variables. So it will be an expanded form of the command that shows all the actions of the script. It is very useful for debugging a shell script.

With the -v option, the BASH shell will echo every command before substituting the values of arguments and variables. With -v, the shell prints each line as it reads it.

With the -v option, if we run the script, the shell prints the entire file and then executes it. If we run the script interactively, it shows each command after we press Enter.
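A small sketch makes the difference visible (the script contents and the variable name are made up for illustration):

#!/bin/bash -xv
name="world"
echo "Hello $name"

When this script runs, the -v option prints the line echo "Hello $name" exactly as written in the file, while the -x option prints the expanded command (echo "Hello world", prefixed with +) just before it is executed.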

147. What is a Filter in Unix command?

In Unix there are many Filter commands, like cat, awk, grep, head, tail, cut, etc.

A Filter is a software program that takes an input and produces an output, and it can be used in a stream operation.

E.g. cut -d : -f 2 /etc/passwd | grep abc

We can mix and match multiple filters to create a complex command that can solve a problem.
Awk and Sed are complex filters that provide fully programmable features.

Even Data scientists use Unix filters to get the overview of data stored in the files.

148. What is Kernel in Unix operating system?

Kernel is the central core component of a Unix operating system (OS).

A Kernel is the main component that can control everything within Unix OS.

It is the first program that is loaded on startup of Unix OS. Once it is loaded it will manage the rest of the startup process.

Kernel manages memory, scheduling as well as communication with peripherals like printers, keyboards etc.

But Kernel does not directly interact with a user. For a new task, Kernel will spawn a shell and user will work in a shell.

Kernel provides many system calls. A software program interacts with Kernel by using system calls.

Kernel has a protected memory area that cannot be overwritten accidentally by any process.

149. What is a Shell in Unix OS?

Shell in Unix is a user interface that is used by a user to access Unix services.

Generally a Unix Shell is a command line interface (CLI) in which users enter commands by typing them or by running a script file.

We use a Shell to run different commands and programs on Unix operating system.

A Shell also has a command interpreter that can take our commands and send these to be executed by Unix operating system.

Some of the popular Shells on Unix are: Korn shell, BASH, C shell etc.
150. What are the different shells in Unix that you know about?

Unix has many flavors of Shell. Some of these are as follows:

Bourne shell: We use sh for Bourne shell.


Bourne Again shell: We use bash to run this shell.
Korn shell: We use ksh for the Korn shell.
Z shell: The command to use this is zsh
C shell: We use csh to run C shell.
Enhanced C shell: tcsh is the command for enhanced C shell.

151. What is the first character of the output in the ls -l command?

We use ls -l command to list the files and directories in a directory. With -l option we get long listing format.

In this format the first character identifies the entry type. The entry type can be one of the following:

b Block special file


c Character special file
d Directory
l Symbolic link
s Socket link
p FIFO
- Regular file

In general we see d for directory and - for a regular file.
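For example (illustrative output only; the file, user and group names are placeholders):

% ls -l
drwxr-xr-x  2 kevin staff 4096 Jan  1 10:10 reports
-rw-r--r--  1 kevin staff  220 Jan  1 10:11 notes.txt

Here the leading d marks reports as a directory and the leading - marks notes.txt as a regular file.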

152. What is the difference between Multi-tasking and Multi-user environments?

In a Multi-tasking environment, same user can submit more than one tasks and operating system will execute them at the same time.

In a Multi-user environment, more than one user can interact with the operating system at the same time.

3. What is Command Substitution in Unix?


Command substitution is a mechanism by which Shell passes the output of a command as an argument to another command. We can even use it to
set a variable or use an argument list in a for loop.

E.g. rm `cat files_to_delete`


In this example files_to_delete is a file containing the list of files to be deleted. cat command outputs this file and gives the output to rm command.
rm command deletes the files.

In general Command Substitution is represented by back quotes `.
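Most modern shells also support the $( ) form of command substitution, which is easier to read and can be nested (a sketch; files_to_delete is the same hypothetical list file):

% rm $(cat files_to_delete)
% today=$(date +%Y-%m-%d)

The second line stores the output of the date command in the variable today.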

153. What is an Inode in Unix?


An Inode is a Data Structure in Unix that denotes a file or a directory on file system. It contains information about file like- location of file on the
disk, access mode, ownership, file type etc.

Each Inode has a number that is used in the index table. Unix kernel uses Inode number to access the contents of an Inode.

We can use ls -i command to get the inode number of a file.

154. What is the difference between absolute path and relative path in
Unix file system?
Absolute path is the complete path of a file or directory from the root directory. In general root directory is represented by / symbol. If we are in a
directory and want to know the absolute path, we can use pwd command.

Relative path is the path relative to the current location in the directory tree.

E.g. in the directory structure /var/user/kevin/mail, if we are in the kevin directory then the pwd command will give the absolute path as /var/user/kevin.

The absolute path of the mail folder is /var/user/kevin/mail. For the mail folder, ./mail is the relative path of the mail directory from the kevin folder.

155. What are the main responsibilities of a Unix Shell?


Some of the main responsibilities of a Unix Shell are as follows:

1. Program Execution: A shell is responsible for executing the commands and script files in Unix. User can either interactively enter the commands
in Command Line Interface called terminal or they can run a script file containing a program.
2. Environment Setup: A shell can define the environment for a user. We can set many environment variables in a shell and use the value of these
variables in our program.

3. Interpreter: A shell acts as an interpreter for our scripts. It has a built in programming language that can be used to implement the logic.

4. Pipeline: A shell also can hookup a pipeline of commands. When we run multiple commands separated by | pipe character, the shell takes the
output of a command and passes it to next one in the pipeline.

5. I/O Redirection: Shell is also responsible for taking input from command line interface (CLI) and sending the output back to CLI. We use >, <,
>> characters for this purpose.

156. What is a Shell variable?


A Unix Shell variable is an internal variable that a shell maintains. It is local to that Shell. It is not made available to the parent shell or child shell.

We generally use lower case names for shell variables in C shell.

We can set the value of a shell variable by set command.

E.g. % set max_threads=10

To delete a Shell variable we can use unset command.

To use a Shell variable in a script we use $ sign in front of the variable name.

E.g. echo $max_threads

157. What are the important Shell variables that are initialized on starting
a Shell?
There are following important Shell variables that are automatically initialized when a Shell starts:
user
term
home
path
These Shell variables take values from environment variables.

If we change the value of these Shell variables then the corresponding environment variable value is also changed.

158. How will you set the value of Environment variables in Unix?
We can use 'setenv' command to set the value of environment variables.
E.g. % setenv [Name] [value]
% setenv MAX_TIME 10

To print the value of environment variable we can use 'printenv' command.


E.g. % printenv MAX_TIME

If we just use printenv then it lists all the environment variables and their values.

To unset or delete an environment variable we use unsetenv command.

E.g. % unsetenv MAX_TIME

To use an environment variable in a command we use the prefix $ with the name of variable.
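Note that setenv and unsetenv belong to the C-shell family. In Bourne-family shells such as bash, a rough equivalent (a sketch) uses export and unset:

% export MAX_TIME=10
% echo $MAX_TIME
% unset MAX_TIME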

What is the special rule about Shell and Environment variable in Bourne Shell?

In Bourne Shell, there is not much difference between Shell variable and Environment variable.

Once we start a Bourne Shell, it gets the value of environment variables and defines a corresponding Shell variable. From that time onwards the
shell only refers to Shell variable. But if a change is made to a Shell variable, then we have to explicitly export it to environment so that other shell
or child processes can use it.

Also for Shell variables we use set and unset commands.

159. What is the difference between a System Call and a library function?

System calls are low-level kernel calls. These are handled by the kernel. System calls are implemented in kernel of Unix. An application has to
execute special hardware and system dependent instruction to run a System call.
A library function is also a low level call but it is implemented in user space. A library call is a regular function call whose code resides in a shared
library.

160. What are the networking commands in Unix that you have used?

Some of the popular networking commands in Unix that we use are as follows:

I. ping : We use this command to test the reachability of a host on an Internet Protocol (IP) network.

II. telnet : This is another useful command to access another machine on the network. This command uses the Telnet protocol.

III. traceroute : Known as tracert on Windows, it is a diagnostic command to display the route and transit delays of packets across an Internet Protocol network.

IV. ftp : We use ftp commands to transfer files over the network. ftp uses File Transfer Protocol.

V. su : This unix command is used to execute commands with the privileges of another user. It is also known as switch user, substitute
user.

VI. ssh : This is a secure command that is preferred over Telnet for connecting to another machine. It creates a secure channel over an
unsecured network. It uses cryptographic protocol to make the communication secure.

161. What is a Pipeline in Unix?

A Pipeline in Unix is a chain of commands that are connected through a stream in such a way that output of one command becomes input for
another command.

E.g. ls -l | grep "abc" | wc -l

In the above example we have created a pipeline of three commands: ls, grep and wc.

First the ls -l command is executed and gives the list of files in a directory. Then the grep command searches for any line with the word "abc" in it. Finally the wc -l command counts the number of lines that are returned by the grep command.

In general a Pipeline is uni-directional. The data flows from left to right direction.

162. What is the use of tee command in Unix?


We use the tee command in a shell to read from standard input and write it to the screen (standard output) as well as to one or more files.

We can use tee command to split the output of a program so that it is visible on command line interface (CLI) as well as stored on a file for later
use.

Syntax is tee [-a] [-i] [file …]
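For example (a sketch; build.log is a hypothetical file name), we can watch the output of a long-running command on the screen and keep a copy of it at the same time:

% make 2>&1 | tee build.log

The -a option makes tee append to the file instead of overwriting it.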

163. How will you count the number of lines and words in a file in Unix?

We can use wc (word count) command for counting the number of lines and words in a file. The wc command provides very good options for
collecting statistics of a file. Some of these options are:

l : This option gives line count


m : This option gives character count
c : This option gives byte count
w : This option gives word count
L: This option gives the length of the longest line

In case we give more than one files as input to wc command then it gives statistics for individual files as well as the total statistics for all files.
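For example (a sketch with a hypothetical file name):

% wc -l -w report.txt

This prints the line count followed by the word count of report.txt.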

164. What is Bash shell?

Bash stands for Bourne Again Shell. It is free software written to replace Bourne shell.

We can see following line in shell scripts for Bash shell.


#!/bin/bash

In Bash we use ~/.profile at login to set environment variables.

In Bash we can execute commands in batch mode or concurrent mode.

In batch mode commands are separated by semi colon.


% command1; command2
In concurrent mode we separate commands by & symbol.
% command1 & command2

165. How will you search for a name in Unix files?

We can use grep command to search for a name or any text in a Unix file.

Grep stands for Globally search a Regular Expression and Print.

Grep command can search for a text in one file as well as multiple files.

We can also specify the text to be searched in regular expression pattern.

% grep ^z *.txt

Above command searches for lines starting with letter z in all the .txt files in current directory.

166. What are the popular options of grep command in Unix?

In Unix, grep is one of the very useful commands. It provides many useful options. Some of the popular options are:

% grep -i : This option ignores case while searching.

% grep -x : This option selects only the lines that exactly match the whole pattern (whole-line match).

% grep -v : We use this option to find the lines that do not contain the text we are searching for.

% grep -A 10 : This option displays 10 lines after the match is found.

% grep -c : We can use it to count the number of matching lines.


167. What is the difference between whoami and who am i commands in
Unix?

Both the commands whoami and who am i are used to get the user information in Unix.

When we login as root user on the network, then both whoami and who am i commands will show the user as root.

But when any other user, let's say john, logs in remotely and runs su - root, whoami will show root, but who am i will show the original user john.

168. What is a Superuser in Unix?

Superuser is a special user account. It is used for Unix system administration. This user can access all files on the file system, and the Superuser can also run any command on the system.

Generally Superuser permission is given to root user.

Most of the users work on their own user accounts. But when they need to run some additional commands, they can use su to switch to Superuser
account.

It is a best practice to not use Superuser account for regular operations.

169. How will you check the information about a process in Unix?

We can use ps command to check the status of a process in Unix. It is short for Process Status.

On running ps command we get the list of processes that are executing in the Unix environment.

Generally we use the ps -ef command. In this, e stands for every process and f stands for full format.

This command gives us id of the process. We can use this id to kill the process.

170. What is the use of more command with cat command?


We generally use the cat command to display the contents of a file.

If a file is very big, its contents will not fit on the screen: the output scrolls off without pausing, and in the end we just see the last page of information from the file, which makes it difficult to view.

With the more command we can pause this scrolling. If we pipe the output of cat into more, we first see just the first page of the file; on pressing Enter, more shows the next portion. In this way the more command displays the file contents one screen page at a time, which makes it much easier to view the information in a file.

171. What are the File modes in Unix?

In Unix, there are three main permissions for a File.

I. r = It means a user can read the file


II. w = It means that a user can write to this file
III. x = It means that a user can execute a file, like a shell script

Further there are three permission sets.

I. Owner: User who created the file


II. Group: This applies to user of a group to which owner belongs
III. Other: This is rest of the users in Unix system

With the combination of these three sets, the permissions of a file in Unix are specified.

E.g. if a file has permissions -rwxr-xr--, it means that the owner has read, write and execute access, the group has read and execute access, and others have just read access. So the owner or admin has to specifically grant execute access to Others for them to execute the file.

172. We wrote a shell script in Unix but it is not doing anything. What
could be the reason?

After writing a shell script we have to give it execute permission so that it can be run in Unix shell.

We can use chmod command to change the permission of a file in Unix. In general we use chmod +x to give execute permission to users for
executing the shell script.
E.g. chmod +x abc.txt will give execute permission to users for executing the file abc.txt.

With the chmod command we can also specify to which user/group the permission should be granted. The options are:

u is the owner user
g is the owner group
o is others
a is all users

177. What is the significance of 755 in chmod 755 command?

We use chmod command to change the permissions of a file in Unix. In this command we can pass the file permissions in the form of a three-digit
number.

In this number 755, first digit 7 is the permissions given to owner, second digit 5 is the permissions of group and third digit 5 is the permissions of
all others.

Also the numbers 7 and 5 are made from following rules:


4 = read permission
2 = write permission
1 = execute permission

So 7 = 4 + 2 + 1 = Read + Write + Execute permission


5 = 4 + 1 = Read + Execute permission

In our example, 755 means that the owner has read, write and execute permissions, while the group and others have read and execute permissions.
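For example (deploy.sh is a hypothetical file name), the following two commands are equivalent ways of setting these permissions, one numeric and one symbolic:

% chmod 755 deploy.sh
% chmod u=rwx,go=rx deploy.sh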

178. How can we run a process in background in Unix? How can we kill a
process running in background?

In Unix shell we can use symbol & to run a command in background.

E.g. % ls -lrt &

Once we use the & option, it runs the process in the background and prints the process ID. We can note down this process ID for using it in the kill command.

We can also use the ps -ef command to get the process IDs of processes running in the background.
Once we know the process ID of a process we can kill it by following command:

% kill -9 processId

179. How will you create a read only file in Unix?

We can create a file with Vi editor, cat or any other command. Once the file is created we have to give read only permissions to file. To change file
permission to read only we use following command:

% chmod 400 filename

180. How does alias work in Unix?

We use alias in Unix to give a short name to a long command. We can even use it to combine multiple commands and give a short convenient
name.

E.g. alias c='clear'

With this alias we just need to type c for running clear command.

In bash we store alias in .bash_profile file.

To get the list of all active aliases in a shell, we can run the alias command without any argument on the command line.

% alias
alias h='history'
alias ki='kill -9'
alias l='last'

181. How can you redirect I/O in Unix?

In Unix we can redirect the output of a command or operation to a file instead of the command line interface (CLI). For this we use redirection operators. These are the symbols > and >>.

If we want to write the output of the ls -lrt command to a file, we use the following:


% ls -lrt > fileList.txt

If we want to copy one file to another file we use following:


% cat srcFile > copyFile

If we want to append the contents of one file at the end of another file we use following:
% cat srcFile >> appendToFile

182. What are the main steps taken by a Unix Shell for processing a
command?

A Unix Shell takes following main steps to process a command:

I. Parse : The first step is to parse the command or set of commands given in the Command Line Interface (CLI). In this step, multiple consecutive spaces are replaced by a single space, and multiple commands that are delimited by a separator (such as a semicolon) are divided into multiple individual actions.

II. Variable : In next step Shell identifies the variables mentioned in commands. Generally any word prefixed by $ sign is a variable.

III. Command Substitution : In this step, Shell executes the commands that are surrounded by back quotes and replaces that section
with the output from the command.

IV. Wild Card : Once these steps are done, Shell replaces the Wild card characters like asterisk * with the relevant substitution.

V. Execute : Finally, Shell executes all the commands and follows the sequence in which Commands are given in CLI.

183. What is a Sticky bit in Unix?

A Sticky bit is a file/directory permission feature in Unix.

Sometimes when we give write permission to another user then that user can delete the file without the owner knowing about it. To prevent such an
accidental deletion of file we use sticky bit.

When we mark a file/directory with a sticky bit, no user other than owner of file/directory gets the privilege to delete a file/directory.
To set the sticky bit we use following command:

% chmod +t filename

When we do ls for a file or directory, the entries with sticky bit are listed with letter t in the end of permissions.

E.g. % ls -lrt

-rwxrwxrwt 5 abc abc 4096 Jan 1 10:10 abc.txt

To remove the sticky bit we use the following command:


% chmod -t filename

184. What are the different outputs from Kill command in Unix?

Kill command in Unix can return following outputs:

I. 0: It means Kill command was successful


II. -1: When we get -1 from Kill command it shows that there was some error. In addition to -1 we get EPERM or ESRCH in output.

EPERM denotes that system does not permit the process to be killed.
ESRCH denotes that process with PID mentioned in Kill command does not exist anymore. Or due to security restrictions we
cannot access that process.

185. How will you customize your environment in Unix?

In Unix, almost all the popular shells provide options to customize the environment by using environment variables. To make these customizations
permanent we can write these to special files that are specific to a user in a shell.

Once we write our customizations to these files, we keep on getting same customization when we open a new shell with same user account.

The special files for storing customization information for different shells at login time are:
I. C shell: /etc/.login or ~/.cshrc
II. TC shell: /etc/.login or ~/.tcshrc
III. Korn shell: /etc/ksh.kshrc or ~/.kshrc
IV. Bash: ~/.bash_profile

186. What are the popular commands for user management in Unix?

In Unix we use following commands for User Management:

I. id : This command gives the active user id with login and groups to which user belongs.

II. who : This command gives the user that is currently logged on system. It also gives the time of login.

III. last : This command shows the previous logins to the system in a chronological order.

IV. adduser : We use this command to add a new user.

V. groupadd : We use this command to add a new group in the system.

VI. usermod : We use the usermod command to add/remove a user to/from a group in Unix.

187. How will you debug a shell script in Unix?

A shell script is a program that can be executed in Unix shell. Sometimes a shell script does not work as intended. To debug and find the problem
in shell script we can use the options provided by shell to debug the script.

In bash shell there are x and v options that can be used while running a script.

% bash -xv <scriptName>

With the -v option, all the input lines are printed by the shell. With the -x option, all the simple commands are printed in expanded format, so we can see all the arguments passed to a command.
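Tracing can also be switched on for only a part of a script with the built-in set command (a minimal sketch; the copy command is just an example):

#!/bin/bash
set -x            # start printing expanded commands from here
cp "$1" /tmp/backup/
set +x            # stop tracing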

188. What is the difference between a Zombie and an Orphan process in Unix?
A Zombie is a defunct child process in Unix that still has an entry in the process table.

A child process becomes a Zombie when it has terminated but its parent has not yet read its exit status (i.e. has not waited on it).

A Zombie process is different from an Orphan process. An Orphan process is a child process whose parent process has died. Once a process is orphaned it is adopted by the init process, so effectively it is no longer an orphan.

Therefore, if a parent process exits without cleaning up its child processes, those children do not become Zombies; instead the init process adopts them and reaps them when they terminate.

Zombie processes are the ones whose exit status has not yet been collected by their parent (or by init).

189. How will you check if a remote host is still alive?

We can use one of the networking commands in Unix. It is called ping. With ping command we can ping a remote host.

Ping utility sends packets in an IP network with ICMP protocol. Once the packet goes from source to destination and comes back it records the
time.

We can even specify the number of packets we want to send so that we collect more statistics to confirm the result.

% ping www.google.com
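For example (a sketch), the -c option available on Linux limits the number of packets sent:

% ping -c 4 www.google.com

This sends four echo requests and then prints summary statistics such as packet loss and round-trip times.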

Another option is to use telnet to remote host to check its status.

190. How will you get the last executed command in Unix?

We can use the history command to get the list of commands that were executed in Unix. Since we are only interested in the last executed command, we can use tail to get the last entries.

Exact command would be as follows:


% history | tail -2

191. What is the meaning of “2>&1” in a Unix shell?


In a Unix shell, file descriptor 1 is for standard output.
File descriptor 2 is for standard error.

We can use "2>&1" in a command so that all the errors from standard error go to the same place as standard output.

% cat file 2>&1
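A common pattern (a sketch; backup.sh and backup.log are hypothetical names) is to send both normal output and errors of a job to the same log file:

% ./backup.sh > backup.log 2>&1

The order matters here: 2>&1 must come after the > redirection so that standard error is sent to the same place as standard output.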

192. How will you find which process is taking most CPU time in Unix?

In Unix, we can use top command to list the CPU time and memory used by various processes. The top command lists the process IDs and CPU
time, memory etc used by top most processes.

Top command keeps refreshing the screen at a specified interval. So we can see over the time which process is always appearing on the top most
row in the result of top command.

This is the process that is consuming most CPU time.

193. What is the difference between Soft link and Hard link in Unix?

A soft link is a pointer to a file, directory or a program located in a different location. A hard link can point to a program or a file but not to a
directory.

If we move, delete or rename a file, the soft link will be broken. But a hard link still remains after moving the file/program.

We use the command ln -s for creating a soft link. But a hard link can be created by the ln command without the -s option.

194. How will you find which processes are using a file?

We can use lsof command to find the list of Process IDs of the processes that are accessing a file in Unix.

Lsof stands for List Open Files.

Sample command is:


% lsof /var
It will list the processes that are accessing /var directory in current unix system.

We can use the options -i, -n and -P for different uses.

% lsof -i will only list IP sockets.

195. What is the purpose of nohup in Unix?

In Unix, nohup command can be used to run a command in background. But it is different from & option to run a process in background.

Nohup stands for No Hangup. A nohup process does not stop even if the Unix user that started the process has logged out from the system.

But the process started with option & will stop when the user that started the process logs off.

196. How will you remove blank lines from a file in Unix?

We can use the grep command for this purpose. The -v option of grep prints only the lines that do not match the given pattern.

In an empty line there is nothing from start to end. In a grep pattern, ^ denotes the start of a line and $ denotes the end of a line, so the pattern '^$' matches the lines that are empty from start to end.

% grep -v '^$' file1.txt prints only the non-empty lines.

Once we get this result, we can use the > operator to write the output to a new file. So the exact command will be:

% grep -v '^$' file1.txt > file2.txt

197. How will you find the remote hosts that are connecting to your
system on a specific port in Unix?

We can use netstat command for this purpose. Netstat command lists the statistics about network connections. We can grep for the port in which
we are interested.
Exact command will be:
% netstat -a | grep "port number"

198. What is xargs in Unix?

We use xargs command to build and execute commands that take input from standard input. It is generally used in chaining of commands.

Xargs breaks the list of arguments into small sub lists that can be handled by a command.

Following is a sample command:

% find /path -type f -print | xargs rm

The above command uses find to get the list of all files in /path directory. Then xargs command passes this list to rm command so that they can be
deleted.
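When file names may contain spaces or newlines, this pipeline is usually written with NUL-separated output (a sketch that assumes GNU-style find and xargs):

% find /path -type f -print0 | xargs -0 rm

Here -print0 and -0 make find and xargs use the NUL character instead of whitespace as the separator between file names.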

TOP 250+ Interview Questions on AWS

Q1) What is AWS?

Answer:AWS stands for Amazon Web Services. AWS is a platform that provides on-demand
resources for hosting web services, storage, networking, databases and other resources over the
internet with a pay-as-you-go pricing.

Q2) What are the components of AWS?

Answer: EC2 – Elastic Compute Cloud, S3 – Simple Storage Service, Route53, EBS – Elastic Block Store, CloudWatch, and Key-Pairs are a few of the components of AWS.

Q3) What are key-pairs?

Answer:Key-pairs are secure login information for your instances/virtual machines. To connect to the
instances we use key-pairs that contain a public-key and private-key.

Q4) What is S3?

Answer:S3 stands for Simple Storage Service. It is a storage service that provides an interface that
you can use to store any amount of data, at any time, from anywhere in the world. With S3 you pay
only for what you use and the payment model is pay-as-you-go.

Q5) What are the pricing models for EC2instances?

Answer:The different pricing model for EC2 instances are as below,

• On-demand
• Reserved
• Spot
• Scheduled
• Dedicated

Q6) What are the types of volumes for EC2 instances?


Answer:

• There are two types of volumes,


• Instance store volumes
• EBS – Elastic Block Stores

Q7) What are EBS volumes?

Answer:EBS stands for Elastic Block Stores. They are persistent volumes that you can attach to the
instances. With EBS volumes, your data will be preserved even when you stop your instances, unlike
your instance store volumes where the data is deleted when you stop the instances.

Q8) What are the types of volumes in EBS?

Answer:Following are the types of volumes in EBS,

• General purpose
• Provisioned IOPS
• Magnetic
• Cold HDD
• Throughput optimized

Q9) What are the different types of instances?

Answer: Following are the types of instances,

• General purpose
• Compute Optimized
• Storage Optimized
• Memory Optimized
• Accelerated Computing

Q10) What is an auto-scaling and what are the components?

Answer: Auto scaling allows you to automatically scale-up and scale-down the number of instances
depending on the CPU utilization or memory utilization. There are 2 components in Auto scaling, they
are Auto-scaling groups and Launch Configuration.

Q11) What are reserved instances?

Answer: Reserved instances let you reserve a fixed capacity of EC2 instances. With reserved instances you have to get into a contract of 1 year or 3 years.

Q12)What is an AMI?

Answer: AMI stands for Amazon Machine Image. AMI is a template that contains the software
configurations, launch permission and a block device mapping that specifies the volume to attach to
the instance when it is launched.

Q13) What is an EIP?


Answer: EIP stands for Elastic IP address. It is designed for dynamic cloud computing. When you
want to have a static IP address for your instances when you stop and restart your instances, you will
be using EIP address.

Q14) What is Cloudwatch?

Answer: Cloudwatch is a monitoring tool that you can use to monitor your various AWS resources.
Like health check, network, Application, etc.

Q15) What are the types in cloudwatch?

Answer: There are 2 types in cloudwatch. Basic monitoring and detailed monitoring. Basic monitoring
is free and detailed monitoring is chargeable.

Q16) What are the cloudwatch metrics that are available for EC2 instances?

Answer: Diskreads, Diskwrites, CPU utilization, networkpacketsIn, networkpacketsOut, networkIn,


networkOut, CPUCreditUsage, CPUCreditBalance.

Q17) What is the minimum and maximum size of individual objects that you can store in S3?
Answer: The minimum size of individual objects that you can store in S3 is 0 bytes and the maximum
bytes that you can store for individual objects is 5TB.

Q18) What are the different storage classes in S3?

Answer: Following are the types of storage classes in S3,

• Standard frequently accessed
• Standard infrequently accessed
• One-zone infrequently accessed
• Glacier
• RRS – reduced redundancy storage

Q19) What is the default storage class in S3?

Answer: The default storage class in S3 is Standard frequently accessed.

Q20) What is glacier?

Answer: Glacier is the back up or archival tool that you use to back up your data in S3.

Q21) How can you secure the access to your S3 bucket?

Answer: There are two ways that you can control the access to your S3 buckets,

• ACL – Access Control List


• Bucket policies

Q22) How can you encrypt data in S3?

Answer: You can encrypt the data by using the below methods,

• Server Side Encryption – S3 (AES 256 encryption)


• Server Side Encryption – KMS (Key management Service)
• Server Side Encryption – C (Customer-provided keys)
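As an illustration (a sketch; the file and bucket names are placeholders), server-side encryption can be requested while uploading an object with the AWS CLI:

aws s3 cp data.csv s3://my-bucket/data.csv --sse AES256

Passing --sse aws:kms instead would request SSE-KMS.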

Q23) What are the parameters for S3 pricing?

Answer: The pricing model for S3 is as below,

• Storage used
• Number of requests you make
• Storage management
• Data transfer
• Transfer acceleration

Q24) What is the pre-requisite to work with Cross region replication in S3?

Answer: You need to enable versioning on both the source bucket and the destination bucket to work with cross region replication. Also, the source and destination buckets should be in different regions.

Q25) What are roles?

Answer: Roles are used to provide permissions to entities that you trust within your AWS account. Roles can also be granted to entities in another account. Roles are similar to users, but with roles you do not need to create a username and password to work with the resources.

Q26) What are policies and what are the types of policies?

Answer: Policies are permissions that you can attach to the users that you create. These policies contain the access that you have provided to the users you have created. There are 2 types of policies.

• Managed policies
• Inline policies

Q27) What is cloudfront?

Answer: CloudFront is an AWS web service that provides businesses and application developers an easy and efficient way to distribute their content with low latency and high data transfer speeds. CloudFront is the content delivery network (CDN) of AWS.

Q28) What are edge locations?

Answer: Edge location is the place where the contents will be cached. When a user tries to access
some content, the content will be searched in the edge location. If it is not available then the content
will be made available from the origin location and a copy will be stored in the edge location.

Q29) What is the maximum individual archive that you can store in glacier?

Answer: You can store a maximum individual archive of up to 40 TB.

Q30) What is VPC?

Answer: VPC stands for Virtual Private Cloud. VPC allows you to easily customize your networking
configuration. VPC is a network that is logically isolated from other network in the cloud. It allows
you to have your own IP address range, subnets, internet gateways, NAT gateways and security
groups.

Q31) What is VPC peering connection?

Answer: VPC peering connection allows you to connect 1 VPC with another VPC. Instances in these
VPC behave as if they are in the same network.

Q32) What are NAT gateways?

Answer: NAT stands for Network Address Translation. NAT gateways enable instances in a private subnet to connect to the internet but prevent the internet from initiating a connection with those instances.

Q33) How can you control the security to your VPC?

Answer: You can use security groups and NACLs (Network Access Control Lists) to control the security of your VPC.
Q34) What are the different types of storage gateway?

Answer: Following are the types of storage gateway.

• File gateway
• Volume gateway
• Tape gateway

Q35) What is a snowball?

Answer: Snowball is a data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using Snowball, you can move a huge amount of data from one place to another, which reduces your network costs and long transfer times and also provides better security.

Q36) What are the database types in RDS?

Answer: Following are the types of databases in RDS,

• Aurora
• Oracle
• MySQL
• PostgreSQL
• MariaDB
• SQL server

Q37) What is a redshift?

Answer: Amazon redshift is a data warehouse product. It is a fast and powerful, fully managed,
petabyte scale data warehouse service in the cloud.

Q38) What is SNS?


Answer: SNS stands for Simple Notification Service. SNS is a web service that makes it easy to send notifications from the cloud. You can set up SNS to receive email notifications or message notifications.

Q39) What are the types of routing polices in route53?

Answer: Following are the types of routing policies in route53,

• Simple routing
• Latency routing
• Failover routing
• Geolocation routing
• Weighted routing
• Multivalue answer

Q40) What is the maximum size of messages in SQS?

Answer: The maximum size of messages in SQS is 256 KB.

Q41) What are the types of queues in SQS?

Answer: There are 2 types of queues in SQS.


• Standard queue
• FIFO (First In First Out)

Q42) What is multi-AZ RDS?

Answer: Multi-AZ (Availability Zone) RDS allows you to have a replica of your production database in another Availability Zone. A Multi-AZ database is used for disaster recovery: you will have an exact copy of your database, so when your primary database goes down, your application will automatically fail over to the standby database.

Q43) What are the types of backups in RDS database?

Answer: There are 2 types of backups in RDS database.

• Automated backups
• Manual backups which are known as snapshots.

Q44) What is the difference between security groups and network access control list?

Answer:
Security Groups:
• Control access at the instance level
• Support "allow" rules only
• Evaluate all rules before allowing traffic
• Up to 5 security groups can be associated with a network interface (default limit), and an instance can belong to multiple security groups
• Stateful filtering

Network Access Control List (NACL):
• Controls access at the subnet level
• Supports both "allow" and "deny" rules
• Rules are processed in numbered order when allowing traffic
• A subnet is associated with exactly one NACL
• Stateless filtering

Q45) What are the types of load balancers in EC2?

Answer: There are 3 types of load balancers,


• Application load balancer
• Network load balancer
• Classic load balancer

Q46) What is an ELB?

Answer: ELB stands for Elastic Load Balancing. ELB automatically distributes incoming application or network traffic across multiple targets such as EC2 instances, containers and IP addresses.

Q47) What are the two types of access that you can provide when you are creating users?

Answer: Following are the two types of access that you can create.

• Programmatic access
• Console access

Q48) What are the benefits of auto scaling?

Answer: Following are the benefits of auto scaling

• Better fault tolerance


• Better availability
• Better cost management

Q49) What are security groups?

Answer: A security group acts as a virtual firewall that controls the traffic for one or more instances. You can associate one or more security groups with your instances when you launch them. You can add rules to each security group that allow traffic to and from its associated instances. You can modify the rules of a security group at any time; the new rules are automatically and immediately applied to all the instances that are associated with the security group.
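
A minimal boto3 sketch of creating a security group and opening SSH and HTTPS inbound (the VPC ID and group name are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Create the group inside an existing VPC (placeholder VPC ID)
sg = ec2.create_security_group(GroupName="web-sg",
                               Description="Allow SSH and HTTPS",
                               VpcId="vpc-0123456789abcdef0")

# Allow inbound SSH (22) and HTTPS (443) from anywhere; all outbound is allowed by default
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)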

Q50) What are shared AMI’s?

Answer: Shared AMIs are AMIs that are created by one developer and made available for other developers to use.

Q51)What is the difference between the classic load balancer and application load balancer?

Answer: An Application Load Balancer supports dynamic port mapping and multiple listeners on multiple ports, whereas a Classic Load Balancer maps one port to one listener.

Q52) By default, how many IP addresses does AWS reserve in a subnet?

Answer: 5

Q53) What is meant by subnet?

Answer: A subnet is one of the chunks that a large range of IP addresses is divided into.

Q54) How can you convert a public subnet to private subnet?

Answer: Remove the route to the Internet Gateway (IGW), add a NAT Gateway, and associate the subnet with a private route table.

Q55) Is it possible to reduce an EBS volume?

Answer: No, it is not possible; we can increase an EBS volume but not reduce it.

Q56) What is the use of elastic ip are they charged by AWS?

Answer: An Elastic IP is a static public IPv4 address used to reach an instance from the internet. You are charged for an Elastic IP when it is not associated with a running instance.
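
A minimal boto3 sketch of allocating an Elastic IP and associating it with an instance (the instance ID is a placeholder assumption):

import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the VPC scope
allocation = ec2.allocate_address(Domain="vpc")
print("Allocated", allocation["PublicIp"])

# Associate it with an existing instance (placeholder ID);
# release it later with ec2.release_address() to stop charges
ec2.associate_address(InstanceId="i-0123456789abcdef0",
                      AllocationId=allocation["AllocationId"])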

Q57) Some objects in one of my S3 buckets were deleted, but I need to restore them. Is there any possible way?

Answer: If versioning is enabled on the bucket, the deleted objects can easily be restored.

Q58) When I try to launch an EC2 instance I get a "Service limit exceeded" error. How do I fix the issue?

Answer: By default, AWS offers a service limit of 20 running instances per region. To fix the issue, contact AWS Support and request a limit increase based on your requirement.

Q59) I need to modify EBS volumes on Linux and Windows. Is it possible?

Answer: Yes, it is possible. From the console, use "Modify Volume" and enter the size you need. Then, for Windows, extend the disk in Disk Management; for Linux, grow the partition/filesystem and mount it to complete the modification.
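
A minimal boto3 sketch of growing a volume (the volume ID and target size are placeholder assumptions); the filesystem still has to be extended inside the OS afterwards:

import boto3

ec2 = boto3.client("ec2")

# Request a larger size for an existing volume (placeholder ID); shrinking is not supported
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # size in GiB

# Optionally check the modification state before extending the filesystem in the OS
state = ec2.describe_volumes_modifications(
    VolumeIds=["vol-0123456789abcdef0"]
)["VolumesModifications"][0]["ModificationState"]
print("modification state:", state)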

Q60) Is it possible to stop a RDS instance, how can I do that?

Answer: Yes, it is possible to stop an RDS instance, provided it is a non-production instance and not a Multi-AZ deployment.

Q61) What is meant by parameter groups in rds. And what is the use of it?

Answer: Since RDS is a managed service, AWS exposes a wide set of database configuration parameters through a parameter group, which can be modified as per your requirement.

Q62) What is the use of tags and how they are useful?

Answer: Tags are used for identifying and grouping AWS resources.

Q63) I can view the AWS Console but I am unable to launch an instance and receive an IAM error. How can I rectify it?

Answer: The IAM user does not have permission to launch instances; the required permissions must be granted via an IAM policy before the user can proceed.

Q64) I don’t want my AWS Account id to be exposed to users how can I avoid it?

Answer: In the IAM console there is a sign-in URL option where I can set an account alias, so users sign in with the alias instead of the AWS account ID.

Q65) By default how many Elastic Ip address does AWS Offer?

Answer: 5 Elastic IP addresses per region.

Q66) You are enabled sticky session with ELB. What does it do with your instance?
Answer: Binds the user session with a specific instance

Q67) Which type of load balancer makes routing decisions at either the transport layer or the application layer and supports either EC2 or VPC?

Answer: Classic Load Balancer

Q68) Which is virtual network interface that you can attach to an instance in a VPC?

Answer: Elastic Network Interface

Q69) You have launched a Linux instance in AWS EC2. While configuring the security group, you have selected the SSH, HTTP and HTTPS protocols. Why do we need to select SSH?

Answer: SSH (port 22) must be allowed so that you can connect to the Linux instance from your computer over the command line.

Q70) You have launched a Windows instance with EC2-Classic and you want to make some changes to the security group. How will these changes become effective?

Answer: Changes to a security group are automatically and immediately applied to the associated instances.

Q71) Load balancing and DNS services come under which type of cloud service?

Answer: IaaS (Networking)

Q72) You have an EC2 instance that has an unencrypted volume. You want to create another encrypted volume from this unencrypted volume. Which steps can achieve this?

Answer: Create a snapshot of the unencrypted volume, copy the snapshot while applying encryption parameters, and then create a volume from the copied (encrypted) snapshot.
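
A minimal boto3 sketch of that flow (the volume ID, region and Availability Zone are placeholder assumptions):

import boto3

region = "us-east-1"                       # placeholder region
ec2 = boto3.client("ec2", region_name=region)

# 1. Snapshot the unencrypted volume (placeholder ID)
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="pre-encryption snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot with encryption enabled (uses the default KMS key here)
copy = ec2.copy_snapshot(SourceSnapshotId=snap["SnapshotId"],
                         SourceRegion=region, Encrypted=True)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# 3. Create the new, encrypted volume from the copied snapshot
ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone=region + "a")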

Q73) Where does the user specify the maximum number of instances for Auto Scaling?

Answer: In the Auto Scaling group configuration (its maximum capacity setting); the launch configuration only defines what each instance looks like.

Q74) Which are the types of AMI provided by AWS?

Answer: Instance Store backed, EBS Backed

Q75) After configuring ELB, you need to ensure that user requests are always attached to a single instance. What setting can you use?

Answer: Sticky session

Q76) When would I prefer Provisioned IOPS over standard RDS storage?

Answer: When you have I/O-intensive workloads (for example, high-throughput transactional/OLTP databases) that need fast and consistent I/O performance.

Q77) If I am running a Multi-AZ deployment on my DB instance, can I use the standby DB instance for read or write operations along with the primary DB instance?

Answer: No. The standby DB instance cannot be used for reads or writes; it only takes over when the primary DB instance stops working.

Q78) Which AWS service would you use to collect and process e-commerce data for near real-time analysis?

Answer: Amazon DynamoDB.

Q79) A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company's requirements?

Answer: A web application backed by Amazon RDS in a Multi-AZ configuration; a relational database supports the complex queries and table joins, and Multi-AZ provides high availability with minimal operational effort.

Q80) Which use cases are suitable for Amazon DynamoDB?

Answer: Storing metadata for Amazon S3 objects and other key-value / session-style workloads. DynamoDB is not suitable for running relational joins and complex updates.

Q81) Your application retrieves data from your users' mobile devices every 5 minutes and stores it in DynamoDB. Every day, at a particular time, the data is extracted into S3 on a per-user basis and your application then uses it to visualize the data for each user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

Answer: Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce the provisioned read throughput.

Q82) You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements?

Answer: Deploy an ElastiCache in-memory cache running in each Availability Zone, and increase the RDS MySQL instance size and implement Provisioned IOPS.

Q83) A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that around 4 GB of sensor data is generated every month. The company uses a load-balanced, auto-scaled layer of EC2 instances and an RDS database with 500 GB of standard storage. The pilot was a success and now they want to deploy at least 100K sensors, which need to be supported by the backend. You need to store the data for at least 2 years in order to analyze it. Which of the following setups would you prefer?

Answer: Replace the RDS instance with a 6-node Redshift cluster with 96 TB of storage.

Q84) Suppose you have an application where you have to render images and also do some general computing. Which load balancer will best fit your need?

Answer: An Application Load Balancer (its path-based routing can send the image rendering and general compute requests to different target groups).

Q85) How will you change the instance type for instances that are running in your application tier and are using Auto Scaling? Where will you change it from?

Answer: Change it in the Auto Scaling launch configuration.

Q86) You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce the load on the Amazon EC2 instance?

Answer: Create a load balancer, and register the Amazon EC2 instance with it.

Q87) What does connection draining do?

Answer: Connection draining re-routes traffic away from instances that are about to be updated or have failed a health check, after allowing their in-flight requests to complete.

Q88) When an instance is unhealthy it is terminated and replaced with a new one. Which feature does that?

Answer: Fault tolerance (the Auto Scaling health check replaces unhealthy instances automatically).

Q89) What are lifecycle hooks used for in Auto Scaling?

Answer: They are used to add an additional wait time to a scale-in or scale-out event, so that custom actions can run before the instance enters service or is terminated.

Q90) A user has set up an Auto Scaling group. Due to some issue, the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?

Answer: Auto Scaling will suspend the scaling process.

Q91) You have an EC2 security group with several running EC2 instances. You change the security group rules to allow inbound traffic on a new port and protocol, and then launch several new instances in the same security group. When do the new rules apply?

Answer: Immediately, to all instances in the security group.

Q92) To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources does not need to be recreated in the second region?

Answer: Route 53 record sets (Route 53 is a global service).

Q93) A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes only. Which option should he choose for his application?

Answer: Enable access logs on the load balancer; access logs can be published at 5-minute intervals and contain the client connection information, whereas AWS CloudTrail only records API calls made to the load balancer.

Q94) Which of the services would you not use to deploy an app?

Answer: AWS Lambda is not used to deploy an app (unlike services such as Elastic Beanstalk, OpsWorks and CodeDeploy).

Q95) How does Elastic Beanstalk apply updates?

Answer: By preparing a duplicate environment with the updates ready before swapping it with the original.

Q96) I created a KMS key in the Oregon region to encrypt my data in the North Virginia region for security purposes. I added two users to the key and an external AWS account. When I tried to encrypt an object in S3, the key that I just created was not listed. What could be the reason, and what is the solution?

Answer: KMS keys are region-specific; the key must be created in the same region as the data (the S3 bucket) it is meant to encrypt.


Q97) A company needs to monitor the read and write IOPS of its AWS MySQL RDS instances and send real-time alerts to the operations team. Which AWS service can accomplish this?

Answer: Amazon CloudWatch (using CloudWatch alarms for the real-time alerts).

Q98) An organization that is currently using consolidated billing has recently acquired another company that already has a number of AWS accounts. How can an administrator ensure that all the AWS accounts, from both the existing company and the acquired company, are billed to a single account?

Answer: Invite the acquired company's AWS accounts to join the existing company's organization using AWS Organizations.

Q99) A user has created an application which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data, using the DynamoDB SDK from the EC2 instance. What is the best practice for security in this scenario?

Answer: The user should attach an IAM role with DynamoDB access to the EC2 instance, instead of storing credentials on the instance.

Q100) You have an application running on an EC2 instance which allows users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?

Answer: Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with this role, and let the application retrieve the role's credentials from the EC2 instance metadata.
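
A minimal boto3 sketch of that flow running on the instance (the bucket and key names are placeholder assumptions); when an IAM role is attached, boto3 picks up the role's temporary credentials from the instance metadata automatically:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "example-private-bucket", "reports/2019.pdf"   # placeholders

try:
    # Verify the object exists before handing out a URL
    s3.head_object(Bucket=bucket, Key=key)
except ClientError:
    raise SystemExit("object not found")

# Generate a pre-signed GET URL valid for one hour
url = s3.generate_presigned_url("get_object",
                                Params={"Bucket": bucket, "Key": key},
                                ExpiresIn=3600)
print(url)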

Q101) You use Amazon CloudWatch as your primary monitoring system for a web application. After a recent software deployment, your users are getting intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm and notify the on-call engineer when these errors occur. How can you accomplish this using AWS services?

Answer: Create a CloudWatch Logs group and define a metric filter that captures 500 Internal Server Errors. Set a CloudWatch alarm on that metric and use Amazon Simple Notification Service to notify the on-call engineer when the alarm is triggered.
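
A minimal boto3 sketch of that wiring (the log group name, metric namespace and SNS topic ARN are placeholder assumptions, and the log group is assumed to already exist):

import boto3

logs = boto3.client("logs")
cw = boto3.client("cloudwatch")

# Turn log lines containing " 500 " into a custom metric
logs.put_metric_filter(
    logGroupName="/webapp/access",                 # placeholder log group
    filterName="http-500-errors",
    filterPattern='" 500 "',                       # match lines containing " 500 "
    metricTransformations=[{"metricName": "Http500Count",
                            "metricNamespace": "WebApp",
                            "metricValue": "1"}],
)

# Alarm when any 500s are seen in a 5-minute period; notify via SNS
cw.put_metric_alarm(
    AlarmName="webapp-500-errors",
    Namespace="WebApp", MetricName="Http500Count",
    Statistic="Sum", Period=300, EvaluationPeriods=1,
    Threshold=0, ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall"],  # placeholder ARN
)
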
Q102) You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets and smartphones. The supported platforms are Windows, macOS, iOS and Android. Separate sticky-session and SSL certificate setups are required for the different platform types. Which architecture setup is the most cost-effective and performance-efficient?

Answer: Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common component of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

Q103) You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and currently take that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?

Answer: File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP listener and Proxy Protocol enabled to distribute load across the application servers in different AZs.

Q104) Your application currently leverages AWS Auto Scaling to grow and shrink as load increases/decreases, and it has been performing well. Your marketing team expects a steady ramp-up in traffic to follow an upcoming campaign that will result in a 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet peak demand is 175. What should you do to avoid potential service disruptions during the ramp-up in traffic?

Answer: Check the service limits in Trusted Advisor and adjust them as necessary, so that the forecasted count remains within the limits.

Q105) You have a web application running on six Amazon EC2 instances, consuming about 45% of the resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances, and you also want to use the same Amazon Machine Image (AMI) for all instances. Which architectural choices should you make?

Answer: Deploy 3 EC2 instances in one Availability Zone and 3 in another Availability Zone, and use an Amazon Elastic Load Balancer.

Q106) You are designing an application that contains protected health information (PHI). Security and compliance requirements for your application mandate that all PHI use encryption at rest and in transit. The application uses a three-tier architecture where data flows through the load balancer and is stored on Amazon EBS volumes for processing, and the results are stored in Amazon S3 using the AWS SDK. Which options satisfy the security requirements?

Answer: Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes, and Amazon S3 with server-side encryption; or use SSL termination on the load balancer, an SSL listener on the Amazon EC2 instances, Amazon EBS encryption on the EBS volumes containing the PHI, and Amazon S3 with server-side encryption.

Q107) A startup deploys its photo-sharing site in a VPC. An Elastic Load Balancer distributes web traffic across two subnets. The load balancer session stickiness is configured to use the AWS-generated session cookie, with a session TTL of 5 minutes. The web server Auto Scaling group is configured with min-size=4, max-size=4. The startup is preparing for a public launch by running load-testing software installed on a single Amazon EC2 instance running in us-west-2a. After 60 minutes of load testing, the web server logs show the following:

WEBSERVER LOGS | # of HTTP requests from load tester | # of HTTP requests from private beta users
webserver #1 (subnet in us-west-2a): | 19,210 | 434
webserver #2 (subnet in us-west-2a): | 21,790 | 490
webserver #3 (subnet in us-west-2b): | 0 | 410
webserver #4 (subnet in us-west-2b): | 0 | 428

Which recommendation can help ensure that the load-testing HTTP requests are evenly distributed across the four web servers?

Answer: Re-configure the load-testing software to re-resolve DNS for each web request.

Q108) To serve web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy-utilization Reserved Instances (RIs) evenly spread across two Availability Zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium-utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% capacity while the c3.2xlarge instances have significant unused capacity. Which option is the most cost-effective and uses EC2 capacity most effectively?

Answer: Use a separate ELB for each instance type and distribute the load to the ELBs with Route 53 weighted round robin.

Q109) An AWS customer is deploying a web application composed of a front-end running on Amazon EC2 and confidential data stored on Amazon S3. The customer's security policy requires that all access to this sensitive data be authenticated and authorized by a centralized access management system operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system. Which configuration will support these requirements?

Answer: Configure the web application to authenticate end users against the centralized access management system. Have the web application provision trusted users with STS tokens entitling them to download the approved data directly from Amazon S3.

Q110) An enterprise customer is starting their migration to the cloud; their main reason for migrating is agility, and they want to make their internal Microsoft Active Directory available to the many applications running on AWS, so that internal users only have to remember one set of credentials and there is a central point of user control for leavers and joiners. How could they make their Active Directory secure and highly available, with minimal on-premises infrastructure changes, in the most cost- and time-efficient way?

Answer: Using a VPC, they could create an extension of their data center and make use of resilient hardware IPSEC tunnels; they could then have two domain controller instances that are joined to the existing domain and reside within different subnets in different Availability Zones.

Q111) What is Cloud Computing?

Answer: Cloud computing means providing services to access programs, applications, storage, network and servers over the internet through a browser or a client-side application on your PC, laptop or mobile, without the end user installing, updating or maintaining them.
Q112) Why we go for Cloud Computing?

Answer:

• Lower computing cost


• Improved Performance
• No IT Maintenance
• Business connectivity
• Easily upgraded
• Device Independent

Q113) What are the deployment models using in Cloud?

Answer:

• Private Cloud
• Public Cloud
• Hybrid cloud
• Community cloud

Q114) Explain Cloud Service Models?

Answer: SaaS (Software as a Service): a software distribution model in which applications are hosted by a vendor over the internet for the end user, freeing the user from complex software and hardware management. (Ex: Google Drive, Dropbox)

PaaS (Platform as a Service): provides a platform and environment that allow developers to build applications. It frees developers from the complexity of building and maintaining the underlying infrastructure. (Ex: AWS Elastic Beanstalk, Windows Azure)

IaaS (Infrastructure as a Service): provides virtualized computing resources over the internet, such as CPU, memory, switches, routers, firewalls, DNS and load balancers. (Ex: Azure, AWS)

Q115) What are the advantage of Cloud Computing?

Answer:

• Pay per use


• Scalability
• Elasticity
• High Availability
• Increase speed and Agility
• Go global in Minutes

Q116) What is AWS?

Answer: Amazon Web Services is a secure cloud services platform offering compute power, database storage, content delivery and other functionality to help businesses scale and grow.

AWS is fully on-demand.

AWS provides flexibility, availability and scalability.

AWS provides elasticity: scale up and scale down as needed.

Q117) What is mean by Region, Availability Zone and Edge Location?

Answer: Region: an independent collection of AWS resources in a defined geography; a collection of data centers (Availability Zones). All Availability Zones in a region are connected by high-bandwidth links.

Availability Zone: an Availability Zone is simply a data center, designed as an independent failure zone, with high-speed connectivity and low latency.

Edge Locations: edge locations are an important part of the AWS infrastructure. They are CDN endpoints for CloudFront to deliver content to end users with low latency.

Q118) How to access AWS Platform?

Answer:

• AWS Console
• AWS CLI (Command line interface)
• AWS SDK (Software Development Kit)

Q119) What is EC2? What are the benefits in EC2?

Answer: Amazon Elastic Compute Cloud is a web service that provides resizable compute capacity in the cloud. AWS EC2 provides scalable computing capacity in the AWS Cloud. These are virtual servers, also called instances, and we can use them on a pay-per-use basis.

Benefits:

• Easier and Faster


• Elastic and Scalable
• High Availability
• Cost-Effective

Q120) What are the pricing models available in AWS EC2?

Answer:

• On-Demand Instances
• Reserved Instances
• Spot Instances
• Dedicated Host

Q121) What are the types using in AWS EC2?

Answer:

• General Purpose
• Compute Optimized
• Memory optimized
• Storage Optimized
• Accelerated Computing (GPU Based)
Q122) What is AMI? What are the types in AMI?

Answer:

An Amazon Machine Image is a special type of virtual appliance that is used to create a virtual machine within Amazon Elastic Compute Cloud. The AMI defines the initial software that will be on an instance when it is launched.

Types of AMI:

• Published by AWS
• AWS Marketplace
• Generated from existing instances
• Uploaded virtual server

Q123) How are AWS EC2 instances addressed?

Answer:

• Public Domain Name System (DNS) name: when you launch an instance, AWS creates a DNS name that can be used to access the instance.
• Public IP: a launched instance may also have a public IP address. This IP address is assigned from the pool of addresses reserved by AWS and cannot be specified by you.
• Elastic IP: an Elastic IP address is an address, unique on the internet, that you reserve independently and associate with an Amazon EC2 instance. This IP address persists until the customer releases it and is not tied to the lifetime of a particular instance.

Q124) What is Security Group?

Answer: AWS allows you to control traffic in and out of your instances through a virtual firewall called a security group. Security groups allow you to control traffic based on port, protocol and source/destination.

Q125) When does your instance show the retired state?

Answer: The retired state is only applicable to Reserved Instances. Once the reservation term (1 year / 3 years) ends, the Reserved Instance shows the retired state.
Q126) Scenario: my EC2 instance's IP address changes automatically when the instance is stopped and started. What is the reason for that, and what is the solution?

Answer: AWS assigns a public IP automatically, but it changes dynamically when the instance is stopped and started. In that case we need to assign an Elastic IP to that instance; once assigned, it does not change automatically.

Q127) What is Elastic Beanstalk?

Answer: AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their code and the service automatically handles all the details such as resource provisioning, load balancing, auto scaling and monitoring.

Q128) What is Amazon Lightsail?

Answer: Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything you need to jumpstart your project: a virtual machine, SSD-based storage, data transfer, DNS management and a static IP.

Q129) What is EBS?

Answer: Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances. An Amazon EBS volume is automatically replicated within its Availability Zone to protect against component failure, offering high availability and durability. Amazon EBS volumes are available in a variety of types that differ in performance characteristics and price.

Q130) How do the EBS volume types compare?

Answer: Magnetic Volume: Magnetic volumes have the lowest performance characteristics of all
Amazon EBS volume types.

EBS volume size: 1 GB to 1 TB; Average IOPS: 100 IOPS; Maximum throughput: 40–90 MB/s

General-Purpose SSD: general-purpose SSD volumes offer cost-effective storage that is ideal for a broad range of workloads. They are billed based on the amount of space provisioned, regardless of how much data you actually store on the volume.

EBS volume size: 1 GB to 16 TB; Maximum IOPS: up to 10,000 IOPS; Maximum throughput: 160 MB/s

Provisioned IOPS SSD: Provisioned IOPS SSD volumes are designed to meet the needs of I/O
intensive workloads, particularly database workloads that are sensitive to storage performance and
consistency in random access I/O throughput. Provisioned IOPS SSD Volumes provide predictable,
High performance.

EBS volume size: 4 GB to 16 TB; Maximum IOPS: up to 20,000 IOPS; Maximum throughput: 320 MB/s

Q131) What is cold HDD and Throughput-optimized HDD?

Answer: Cold HDD: Cold HDD volumes are designed for less frequently accessed workloads. These
volumes are significantly less expensive than throughput-optimized HDD volumes.

EBS volume size: 500 GB to 16 TB; Maximum IOPS: 200 IOPS; Maximum throughput: 250 MB/s

Throughput-Optimized HDD: throughput-optimized HDD volumes are low-cost HDD volumes designed for frequently accessed, throughput-intensive workloads such as big data and data warehouses.

EBS volume size: 500 GB to 16 TB; Maximum IOPS: 500 IOPS; Maximum throughput: 500 MB/s

Q132) What are Amazon EBS-optimized instances?

Answer: Amazon EBS-optimized instances ensure that the Amazon EC2 instance is prepared to take full advantage of the I/O of the Amazon EBS volume. An EBS-optimized instance uses an optimized configuration stack and provides additional dedicated capacity for Amazon EBS I/O. When you select EBS-optimized for an instance, you pay an additional hourly charge for that instance.

Q133) What is EBS Snapshot?

Answer:
• It can back up the data on the EBS Volume. Snapshots are incremental backups.
• If this is your first snapshot it may take some time to create. Snapshots are point in time
copies of volumes.

Q134) Can an EBS volume be attached to multiple instances?

Answer: We cannot attach one EBS volume to multiple instances, but we can attach multiple EBS volumes to a single instance.

Q135) What are the virtualization types available in AWS?

Answer: Hardware-assisted virtualization (HVM): HVM instances are presented with a fully virtualized set of hardware and boot by executing the master boot record of the root block device of the image. It is the default virtualization type.

Paravirtualization (PV): these AMIs boot with a special boot loader called PV-GRUB. The ability of the guest kernel to communicate directly with the hypervisor can result in greater performance than other virtualization approaches, but PV guests cannot take advantage of hardware extensions such as enhanced networking, GPUs, etc. It is a customized virtualization image that can be used only with particular instance types.

Q136) Differentiate Block storage and File storage?

Answer:

Block Storage: block storage operates at a lower level, the raw storage device level, and manages data as a set of numbered, fixed-size blocks.

File Storage: file storage operates at a higher level, the operating-system level, and manages data as a named hierarchy of files and folders.

Q137) What are the advantages and disadvantages of EFS?

Answer:

Advantages:

• Fully managed service
• File system grows and shrinks automatically to petabytes
• Can support thousands of concurrent connections
• Multi-AZ replication
• Throughput scales automatically to ensure consistent low latency

Disadvantages:

• Not available in all regions
• Cross-region capability not available
• More complicated to provision compared to S3 and EBS

Q138) What are the things we need to remember while creating an S3 bucket?

Answer:

• Amazon S3 bucket names are global: bucket names must be unique across all AWS accounts.
• Bucket names can contain up to 63 lowercase letters, numbers, hyphens and periods.
• You can create and use multiple buckets.
• You can have up to 100 buckets per account by default (a minimal bucket-creation sketch follows this list).
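
A minimal boto3 sketch of creating a bucket in a specific region (the bucket name and region are placeholder assumptions; remember the name must be globally unique):

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Outside us-east-1 the region must be passed as a location constraint
s3.create_bucket(
    Bucket="my-example-bucket-name-2019",            # placeholder, must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
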
Q139) What are the storage class available in Amazon s3?

Answer:

• Amazon S3 Standard
• Amazon S3 Standard-Infrequent Access
• Amazon S3 Reduced Redundancy Storage
• Amazon Glacier

Q140) Explain Amazon s3 lifecycle rules?

Answer: Using Amazon S3 lifecycle configuration rules, you can significantly reduce your storage costs by automatically transitioning data from one storage class to another, or even automatically deleting data after a period of time. For example (a boto3 sketch of this policy follows the list):

• Store backup data initially in Amazon S3 Standard


• After 30 days, transition to Amazon Standard IA
• After 90 days, transition to Amazon Glacier
• After 3 years, delete
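
A minimal boto3 sketch of that example policy (the bucket name is a placeholder assumption):

import boto3

s3 = boto3.client("s3")

# 30 days -> Standard-IA, 90 days -> Glacier, delete after ~3 years
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-backup-bucket",               # placeholder
    LifecycleConfiguration={"Rules": [{
        "ID": "backup-tiering",
        "Filter": {"Prefix": ""},                    # apply to every object
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 1095},
    }]},
)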

Q141) What is the relation between Amazon S3 and AWS KMS?

Answer: To encrypt Amazon S3 data at rest, you can use several variations of Server-Side Encryption (SSE). Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. SSE performed by Amazon S3 and AWS Key Management Service (AWS KMS) uses the 256-bit Advanced Encryption Standard (AES-256).

Q142) What is the function of cross region replication in Amazon S3?

Answer: Cross-region replication is a feature that allows you to asynchronously replicate every new object in a source bucket in one AWS region to a target bucket in another region. To enable cross-region replication, versioning must be turned on for both the source and destination buckets. Cross-region replication is commonly used to reduce the latency required to access objects in Amazon S3.

Q143) How to create Encrypted EBS volume?


Answer: You need to select the "Encrypt this volume" option on the volume-creation page. During creation, a new master key will be created unless you select a master key that you created separately in the service. Amazon uses the AWS Key Management Service (KMS) to handle key management.

Q144) Explain stateful and Stateless firewall.

Answer:

Stateful Firewall: A Security group is a virtual stateful firewall that controls inbound and outbound
network traffic to AWS resources and Amazon EC2 instances. Operates at the instance level. It
supports allow rules only. Return traffic is automatically allowed, regardless of any rules.

Stateless Firewall: A Network access control List (ACL) is a virtual stateless firewall on a subnet
level. Supports allow rules and deny rules. Return traffic must be explicitly allowed by rules.

Q145) What is NAT Instance and NAT Gateway?

Answer:

NAT instance: a network address translation (NAT) instance is an Amazon Linux AMI-based machine that is designed to accept traffic from instances within a private subnet, translate the source IP address to the public IP address of the NAT instance and forward the traffic to the IGW.

NAT Gateway: a NAT gateway is an Amazon-managed resource that is designed to operate just like a NAT instance, but it is simpler to manage and highly available within an Availability Zone. It allows instances within a private subnet to access internet resources through the IGW via the NAT gateway.

Q146) What is VPC Peering?

Answer: An Amazon VPC peering connection is a networking connection between two Amazon VPCs that enables instances in either VPC to communicate with each other as if they were within the same network. You can create a VPC peering connection between your own Amazon VPCs, or with an Amazon VPC in another AWS account, within a single region.

Q147) What is MFA in AWS?

Answer: Multi factor Authentication can add an extra layer of security to your infrastructure by
adding a second method of authentication beyond just password or access key.

Q148) What are the Authentication in AWS?

Answer:

• User Name/Password
• Access Key
• Access Key/ Session Token

Q149) What is Data warehouse in AWS?

Answer: A data warehouse is a central repository for data that can come from one or more sources. Organizations typically use a data warehouse to compile reports and search the database using highly complex queries. A data warehouse is also typically updated on a batch schedule multiple times per day or per hour, compared to an OLTP (Online Transaction Processing) relational database that can be updated thousands of times per second.

Q150) What is mean by Multi-AZ in RDS?

Answer: Multi-AZ allows you to place a secondary copy of your database in another Availability Zone for disaster recovery purposes. Multi-AZ deployments are available for all types of Amazon RDS database engines. When you create a Multi-AZ DB instance, a primary instance is created in one Availability Zone and a secondary instance is created in another Availability Zone.

Q151) What is Amazon Dynamo DB?

Answer: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data.

Q152) What is cloud formation?

Answer: CloudFormation is a service which creates AWS infrastructure from code (templates). It helps reduce the time spent managing resources and lets us create resources quickly and repeatably.
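
A minimal boto3 sketch that creates a stack from an inline template describing a single S3 bucket (the stack name is a placeholder assumption):

import json
import boto3

cfn = boto3.client("cloudformation")

# Smallest possible template: one S3 bucket, name auto-generated by CloudFormation
template = {"Resources": {"DemoBucket": {"Type": "AWS::S3::Bucket"}}}

cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
print("stack created")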

Q153) How to plan Auto scaling?

Answer:

• Manual Scaling
• Scheduled Scaling
• Dynamic Scaling

Q154) What is Auto Scaling group?

Answer: An Auto Scaling group is a collection of Amazon EC2 instances managed by the Auto Scaling service. Each Auto Scaling group contains configuration options that control when Auto Scaling should launch new instances or terminate existing instances.

Q155) Differentiate Basic and Detailed monitoring in cloud watch?

Answer:

Basic Monitoring: basic monitoring sends data points to Amazon CloudWatch every five minutes for a limited number of preselected metrics, at no charge.

Detailed Monitoring: detailed monitoring sends data points to Amazon CloudWatch every minute and allows data aggregation, for an additional charge.

Q156) What is the relationship between Route53 and Cloud front?

Answer: CloudFront delivers content from edge locations, and Route 53 provides the DNS in front of that content delivery network. If you are using Amazon CloudFront, you can configure Route 53 to route internet traffic to your CloudFront distribution.

Q157) What are the routing policies available in Amazon Route53?


Answer:

• Simple
• Weighted
• Latency Based
• Failover
• Geolocation

Q158) What is Amazon ElastiCache?

Answer: Amazon ElastiCache is a web service that simplifies the setup and management of a distributed in-memory caching environment.

• Cost Effective
• High Performance
• Scalable Caching Environment
• Using Memcached or Redis Cache Engine
Q159)What is SES, SQS and SNS?

Answer: SES (Simple Email Service): SES is an SMTP service provided by Amazon which is designed to send bulk mail to customers in a quick and cost-effective manner. With SES you do not have to run and configure your own mail server.

SQS (Simple Queue Service): SQS is a fast, reliable, scalable, fully managed message queuing service. Amazon SQS makes queuing simple and cost-effective. It is a temporary repository for messages that are waiting to be processed and acts as a buffer between the producer component and the consumer component.

SNS (Simple Notification Service): SNS is a web service that coordinates and manages the delivery or
sending of messages to recipients.

Q160) What is AWS?

Answer: Amazon Web Services is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow.

Q161) What are the main cost and scaling benefits of AWS?

Answer: Low price – consume only the amount of compute, storage and other IT resources needed; no long-term commitment, minimum spend or up-front expenditure is required.

Elastic and scalable – quickly increase and decrease the resources allocated to applications to satisfy customer demand and control costs; avoid provisioning resources up-front for projects with variable consumption rates or short lifetimes.

Q162) What is the way to secure data that is being carried in the cloud?

Answer:

• Avoid storing sensitive material in the cloud where possible.
• Read the user agreement to find out how your cloud storage service works.
• Be serious about passwords.
• Encrypt your data.
• Use an encrypted cloud service.

Q163) Name The Several Layers Of Cloud Computing?

Answer: Cloud computing can be broken up into three main services: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS), with SaaS on top, PaaS in the middle and IaaS at the bottom.

Q164) What is Lambda@Edge in AWS?

Answer: Lambda@Edge lets you run Lambda functions to modify the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers.

Q165) Distinguish Between Scalability And Flexibility?

Answer: Cloud computing offers businesses flexibility and scalability when it comes to computing needs:

Flexibility: cloud computing allows your workers to be more flexible, both in and out of the workplace. Workers can access files using web-enabled devices such as smartphones, laptops and notebooks. In this way, cloud computing enables the use of mobile technology.

Scalability: one of the key benefits of using cloud computing is its scalability. Cloud computing allows your business to easily upscale or downscale your IT requirements as and when required. For example, most cloud service providers will allow you to increase your existing resources to accommodate increased business needs or changes. This allows you to support your business growth without expensive changes to your existing IT systems.

Q166) What is IaaS?

Answer: IaaS is a cloud service that provides infrastructure on a "pay for what you use" basis.

IaaS providers include Amazon Web Services, Microsoft Azure and Google Compute Engine.

Users: IT administrators.

Q167) What is PaaS?

Answer: PaaS provides cloud platforms and runtime environments to develop, test and manage software.

Users: software developers.

Q168) What is SaaS?

Answer: In SaaS, cloud providers host and manage the software application on a pay-as-you-go pricing model.

Users: end customers.

Q169) Which automation tools can help with spin-up services?

Answer: The API tools can be used for spin-up services and also for writing scripts. Those scripts could be coded in Perl, Bash or other languages of your preference. There is one more option: configuration management and provisioning tools such as Puppet or its improved descendant Chef. A tool called Scalr can also be used, and finally you can go with a managed solution like RightScale.

Q170) What Is an Ami? How Do I Build One?

Answer: An Amazon Machine Image (AMI) describes the programs and settings that will be applied when you launch an EC2 instance. Once you have finished configuring the data, services and applications on your ArcGIS Server instance, you can save your work as a custom AMI stored in Amazon EC2. You can scale out your site by using this custom AMI to launch additional instances. Use the following process to create your own AMI using the AWS Management Console:

Configure an EC2 instance and its attached EBS volumes in the exact way you want them captured in the custom AMI. Then:

1. Log out of your instance, but do not stop or terminate it.


2. Log in to the AWS Management Console, display the EC2 page for your region, then click
Instances.
3. Choose the instance from which you want to create a custom AMI.
4. Click Actions and click Create Image.
5. Type a name for Image Name that is easily identifiable to you and, optionally, input text for
Image Description.
6. Click Create Image.

Read the message box that appears. To view the AMI status, go to the AMIs page, where you can see your AMI being created. It can take a while to create the AMI; plan for at least 20 minutes, or longer if you have installed a lot of additional applications or data.
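
The same thing can be done programmatically; a minimal boto3 sketch (the instance ID and image name are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Create an AMI from a running instance (placeholder ID); NoReboot avoids stopping it,
# at the cost of filesystem consistency
image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                         Name="arcgis-server-custom-ami",
                         Description="Custom AMI for scaling out",
                         NoReboot=True)

ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
print("AMI ready:", image["ImageId"])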

Q171) What Are The Main Features Of Amazon Cloud Front?

Answer: Amazon CloudFront is a web service that speeds up the delivery of your static and dynamic web content, such as .html, .css, .js and image files, to your users. CloudFront delivers your content through a global network of data centers called edge locations.

Q172) What Are The Features Of The Amazon Ec2 Service?

Answer: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction.

Q173) Explain the storage options for an Amazon EC2 instance.

Answer: An instance store is a temporary storage type located on disks that are physically attached to the host machine, in contrast to AWS Elastic Block Store (AWS EBS), which provides persistent block storage; data stored on instance stores can be backed up to AWS EBS or Amazon S3.

Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components.
Q174) When attached to an Amazon VPC which two components provide connectivity with
external networks?

Answer:

• Internet Gateway {IGW)


• Virtual Private Gateway (VGW)

Q175) Which of the following are characteristics of Amazon VPC subnets?

Answer:

• Each subnet maps to a single Availability Zone.


• By default, all subnets can route between each other, whether they are private or public.

Q176) How can you send request to Amazon S3?

Answer: Every interaction with Amazon S3 is either authenticated or anonymous. Authentication is the process of validating the identity of the requester trying to access an Amazon Web Services (AWS) product. Authenticated requests must include a signature value that authenticates the request sender. The signature value is, in part, generated from the requester's AWS access keys (access key ID and secret access key).
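
A minimal boto3 sketch showing an authenticated request; the SDK computes the request signature automatically from the credentials it finds (environment variables, shared credentials file or an instance role), and the bucket/key names are placeholder assumptions:

import boto3

s3 = boto3.client("s3")   # credentials are resolved and used to sign every request

# Both calls below are sent as signed (authenticated) HTTPS requests
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

s3.put_object(Bucket="my-example-bucket-name-2019",   # placeholder
              Key="hello.txt", Body=b"hello from a signed request")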

Q177) What is the best approach to securing data for transfer to and storage in the cloud?

Answer: Back up data locally. One of the most important things to consider when managing data is to ensure that you have backups of your data. In addition:

• Avoid storing sensitive information in the cloud where possible.
• Use cloud services that encrypt data.
• Encrypt your data yourself.
• Install anti-virus software.
• Make passwords stronger.
• Test the security measures in place.

Q178) What is AWS Certificate Manager ?

Answer: AWS Certificate Manager is a service that lets you easily provision, manage and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the internet, as well as of resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading and renewing SSL/TLS certificates.

Q179) What is the AWS Key Management Service

Answer: AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is also integrated with AWS CloudTrail to provide encryption key usage logs that help you meet your auditing, regulatory and compliance needs.

Q180) What is Amazon EMR ?


Answer: Amazon Elastic MapReduce (EMR) is a service that provides a fully managed, hosted Hadoop framework on top of Amazon Elastic Compute Cloud (EC2).

Q181) What is Amazon Kinesis Firehose ?

Answer: Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data stores and analytics tools. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.

Q182) What is Amazon CloudSearch and what are its features?

Answer: Amazon CloudSearch is a scalable cloud-based search service that forms part of Amazon Web Services (AWS). CloudSearch is typically used to integrate customized search capabilities into other applications. According to Amazon, developers can set up a search application and deploy it fully in under 60 minutes.
Q183) Is it possible for an EC2-Classic instance to become a member of a virtual private cloud?

Answer: Amazon Virtual Private Cloud (Amazon VPC) enables you to define a virtual network in your own logically isolated area within the AWS cloud, known as a virtual private cloud (VPC). You can launch your Amazon EC2 resources, such as instances, into the subnets of your VPC. Your VPC closely resembles a traditional network that you might operate in your own data center, with the benefits of using scalable infrastructure from AWS. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways and security settings. You can connect instances in your VPC to the internet or to your own data center.

Q184) Describe the work done by an Amazon VPC router.

Answer: A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud, and you can launch your AWS resources, such as Amazon EC2 instances, into your VPC. The VPC router uses the route tables you configure to direct traffic between the subnets in the VPC and to gateways such as the internet gateway or virtual private gateway.

Q185) How can one connect a VPC to a corporate data center?

Answer: AWS Direct Connect enables you to securely connect your AWS environment to your on-premises data center or office location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic connection. AWS Direct Connect offers dedicated high-speed, low-latency connections that bypass internet service providers in your network path. An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with, as well as access to other US regions. AWS Direct Connect lets you logically partition the fiber-optic connections into multiple logical connections called Virtual Local Area Networks (VLANs). You can take advantage of these logical connections to improve security, separate traffic and achieve compliance requirements.

Q186) Is it possible to use S3 with EC2 instances?

Answer: Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers get access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2. An additional use case might be for websites hosted on EC2 to load their static content from S3.

Q187) What is the distinction between Amazon S3 and EBS ?

Answer: EBS is for mounting directly onto EC2 server instances. S3 is object storage that is not constantly being accessed (and is therefore cheaper). There is also the much cheaper AWS Glacier, which is for long-term storage that you don't generally expect to need to access, but wouldn't want to lose.

There are then two main kinds of EBS – HDD (hard disk drives, i.e. magnetic spinning disks), which are fairly slow to access, and SSD (solid-state drives), which are very fast to access but more expensive.

• Finally, EBS can be purchased with or without Provisioned IOPS.
• These distinctions come with corresponding pricing differences, so it is worth paying attention to them and using the cheapest option that delivers the performance you require.
Q188) What do you understand by AWS?

Answer: This is one of the commonly asked AWS developer interview questions. It checks your basic AWS knowledge, so the answer should be clear. Amazon Web Services (AWS) is a cloud services platform which offers computing power, analytics, content delivery, database storage, deployment and several other services to help you in your business growth. These services are highly scalable, reliable, secure and inexpensive cloud computing services which are designed to work together, and the applications thus built are more sophisticated and scalable.

Q189) Explain the main components of AWS.

Answer: The main components of AWS are:

Route 53: a highly scalable DNS web service.

Simple Storage Service (S3): the most widely used AWS storage web service.

Simple Email Service (SES): a hosted transactional email service that lets you smoothly send deliverable emails using a RESTful API call or regular SMTP.

Identity and Access Management (IAM): provides improved identity and security management for an AWS account.

Elastic Compute Cloud (EC2): a central piece of the AWS ecosystem, responsible for providing on-demand and flexible computing resources with a "pay as you go" pricing model.

Elastic Block Store (EBS): offers persistent storage volumes that appear to instances as regular hard drives.

CloudWatch: enables the administrator to view and collect key metrics and also set a series of alarms so as to be notified if there is any trouble.

This is among the frequently asked AWS developer interview questions. Just read the interviewer's mind and answer appropriately, either with the component names alone or with the descriptions alongside.

Q190) What do you mean by AMI? What does it include?

Answer: You may come across one or more AMI-related questions during your AWS developer interview, so prepare yourself with a good knowledge of AMIs.

AMI stands for Amazon Machine Image. It is an AWS template which provides the information (an application server, an operating system, and applications) required to launch an instance. An instance is a copy of the AMI running in the cloud as a virtual server. You can launch instances from as many different AMIs as you require. An AMI consists of the following:

A root volume template for an existing instance

Launch permissions to determine which AWS accounts will get the AMI in order to launch instances

A block device mapping to determine the volumes that will be attached to the instance at the time of launch

Q191) Is vertical scaling possible on an Amazon instance?

Answer: Yes, vertical scaling is possible on an Amazon instance.

This is one of the common AWS developer interview questions. If the interviewer is expecting a detailed answer from you, explain the procedure for vertical scaling (stop the instance, change its instance type to a larger size, and start it again).

Q192) What is the relationship between an AMI and an instance?

Answer: Various types of instances can be launched from one AMI. The type of an instance generally determines the hardware components of the host computer that is used for the instance, and each instance type has distinct computing and memory capabilities.

Once an instance is launched, it acts as a host and the user interaction with it is the same as with any other computer, but we have completely controlled access to our instances. AWS developer interview questions may contain one or more AMI-based questions, so prepare the AMI topic very well.

Q193) What is the difference between Amazon S3 and EC2?

Answer: The difference between Amazon S3 and EC2 is given below:

Amazon S3: S3 stands for Simple Storage Service. It is a data storage service used to store large binary files. It does not require running a server. It has a REST interface and uses secure HMAC-SHA1 authentication keys.

Amazon EC2: EC2 stands for Elastic Compute Cloud. It is a cloud web service used to host the applications you build. It is meant for running a server. It is much like a huge computer that can handle applications such as Python, PHP, Apache and any database.

When you are going for an AWS developer interview, prepare the concepts of Amazon S3 and EC2, and the difference between them.

Q194) How many storage options are there for an EC2 instance?

Answer: There are four storage options for an Amazon EC2 instance:

• Amazon EBS
• Amazon EC2 Instance Store
• Amazon S3
• Adding Storage
Amazon EC2 is a core topic you may come across while going through AWS developer interview questions. Get a thorough understanding of EC2 instances and all the storage options available for them.

Q195) What are the security best practices for Amazon EC2 instances?

Answer: There are several best practices for securing Amazon EC2 instances that are applicable whether the instances run in on-premise data centres or as virtual machines. Let's look at some general best practices:

Least Access: Make sure that your EC2 instance has controlled access to the instance as well as to the network. Grant access only to trusted entities.

Least Privilege: Follow the important principle of least privilege for instances and users to perform their functions. Create roles with restricted access for the instances.

Configuration Management: Treat each EC2 instance as a configuration item and use AWS configuration management services to keep a baseline configuration for the instances, as these services include updated anti-virus software, security features, and so on.

Whatever the job role, you may come across security-based AWS interview questions, so get prepared with this question to crack the AWS developer interview.

Q196) Explain the features of Amazon EC2 services.

Answer: Amazon EC2 services have the following features:

• Virtual computing environments
• Persistent storage volumes
• Firewall settings that allow you to specify the protocols, ports, and source IP ranges that can reach your instances
• Pre-configured templates (AMIs)
• Static IP addresses for dynamic cloud computing (Elastic IP addresses)

Q197) What are the ways to send a request to Amazon S3?

Answer: There are two ways to send a request to Amazon S3:

• Using the REST API
• Using the AWS SDK wrapper libraries, which wrap the REST API for Amazon S3 (a short example follows)
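As a minimal, hedged sketch of the second option (the AWS CLI shown here is built on the SDK; the bucket and file names are placeholders, not values from the original answer):

aws s3 cp ./report.txt s3://my-bucket/report.txt                            # upload an object (wraps the REST PUT Object call)
aws s3api get-object --bucket my-bucket --key report.txt report-copy.txt    # download it via the lower-level s3api interface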

Q198) What is the default number of buckets that can be created in AWS?

Answer: This is a very simple question but it ranks high among AWS developer interview questions. Answer it directly: the default number of buckets that can be created in each AWS account is 100.

Q199) What is the purpose of T2 instances?

Answer: T2 instances are designed for:

Providing a moderate baseline performance

Bursting to higher performance as required by the workload

Q200) What is the use of a buffer in AWS?

Answer: This is among the frequently asked AWS developer interview questions. Give the answer in simple terms: a buffer is mainly used to manage load by synchronizing different components, i.e. to make the system fault tolerant. Without a buffer, components don't use any balanced method to receive and process requests. The buffer makes the components work in a balanced way and at the same speed, which results in faster services.

Q201) What happens when an Amazon EC2 instance is stopped or terminated?

Answer: When an Amazon EC2 instance is stopped, a normal shutdown is performed. After that, the transition to the stopped state happens. During this, all of the Amazon EBS volumes remain attached to the instance, and the instance can be started again at any time. Instance hours are not counted while the instance is in the stopped state.

When an Amazon EC2 instance is terminated, a normal shutdown is also performed. During this, the attached Amazon EBS volumes are deleted; to avoid this, the value of the deleteOnTermination attribute is set to false. On termination the instance itself is also deleted, so it cannot be started again.

Q202) What are the popular DevOps tools?

Answer: In an AWS DevOps Engineer interview, this is one of the most common DevOps questions. To answer it, mention the popular DevOps tools along with the type of each tool:

• Jenkins – Continuous Integration tool
• Git – Version Control System tool
• Nagios – Continuous Monitoring tool
• Selenium – Continuous Testing tool
• Docker – Containerization tool
• Puppet, Chef, Ansible – Deployment and Configuration Management tools
Q203) What are IAM Roles and Policies, and what is the difference between them?

Answer: Roles are for AWS services, where we can grant permissions for one AWS service to another service.

Example – Giving an EC2 instance a role so that it can access the contents of an S3 bucket.

Policies are for users and groups, where we assign permissions to users and groups.

Example – Giving a user permission to access S3 buckets. (A short CLI sketch follows.)
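As a rough, hedged sketch of the first example (the role name, trust-policy file, and managed policy ARN here are illustrative, not taken from the original answer):

aws iam create-role --role-name ec2-s3-read --assume-role-policy-document file://ec2-trust-policy.json    # trust policy letting ec2.amazonaws.com assume the role
aws iam attach-role-policy --role-name ec2-s3-read --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess    # managed policy granting read-only S3 access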

Q204) What are the default components we get when we create a custom AWS VPC?

Answer:

• Route Table
• Network ACL
• Security Group

Q205) What is the difference between a public subnet and a private subnet?

Answer: A public subnet has an Internet Gateway route in its associated route table; a private subnet does not have an Internet Gateway route in its associated route table.

A public subnet therefore has direct internet access, while a private subnet does not.

Q206) How do you access an EC2 instance that has only a private IP and is in a private subnet?

Answer: We can access it over a VPN, if a VPN is configured into the particular VPC whose subnet holds the EC2 instance. We can also access it through another EC2 instance (a bastion/jump host) in the same VPC that has public access.

Q207) We have a custom VPC configured and a MySQL database server in a private subnet, and we need to update the MySQL database server. What are the options to do so?

Answer: Use a NAT Gateway in the VPC, or launch a NAT instance (EC2). Create the NAT Gateway in a public subnet (one whose route table has a route to the IGW) and add a route to the NAT Gateway in the route table that is attached to the private subnet. A short sketch follows.
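As a rough sketch (all resource IDs below are placeholders): create the NAT Gateway in the public subnet with an Elastic IP, then point the private subnet's route table at it.

aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE    # NAT gateway in the public subnet
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE    # default route for the private subnet via the NAT gateway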

Q208) What are the differences between Security Groups and Network ACLs?

Answer:

Security Groups
• Attached to an EC2 instance.
• Stateful – changes made in incoming rules are automatically applied to the outgoing traffic.
• Blocking a specific IP address cannot be done.
• Allow rules only; by default everything else is denied.

Network ACLs
• Attached to a subnet.
• Stateless – changes made in incoming rules are not applied to the outgoing traffic.
• A specific IP address can be blocked.
• Both Allow and Deny rules can be used.

Q209) What are the differences between Route 53 and ELB?

Answer: Amazon Route 53 handles DNS. Route 53 gives you a web interface through which DNS can be managed, and using Route 53 it is possible to direct and fail over traffic. This is achieved by using DNS routing policies.

One such routing policy is the Failover routing policy: we set up a health check to monitor your application endpoints, and if one of the endpoints is not available, Route 53 automatically forwards the traffic to the other endpoint.

Elastic Load Balancing

ELB automatically scales depending on the demand, so manually sizing the load balancers to handle more traffic is not required.
Q210) What are the DB engines which can be used in AWS RDS?

Answer:

• MariaDB
• MySQL
• Microsoft SQL Server
• PostgreSQL
• Oracle

Q211) What are Status Checks in AWS EC2?

Answer: System Status Checks – System status checks look for problems with the underlying infrastructure that need AWS involvement to resolve. When a system status check fails, you can wait for AWS to resolve the issue, or resolve it yourself. Typical causes:

• Loss of network connectivity
• Loss of system power
• Software issues on the data centre host
• Hardware issues on the data centre host

Instance Status Checks – Instance status checks look for issues that need our involvement to fix. If an instance status check fails, we can reboot that particular instance. Typical causes:

• Failed system status checks
• Exhausted memory
• Corrupted file system
• Kernel issues

Q212) What conditions must be met to establish a peering connection between two VPCs?

Answer:

• The CIDR blocks of the two VPCs must not overlap when setting up the peering connection.
• A peering connection is allowed within a region, across regions, and across different AWS accounts.

Q213) Troubleshooting EC2 instances: what do the status check results indicate?


Answer: Instance states

• If the status check shows 0/2, there might be a hardware issue.
• If the status check shows 1/2, there might be an issue with the OS.
Workaround – restart the instance; if that still does not work, the instance logs will help to fix the issue.

Q214) How can EC2 instances be resized?

Answer: EC2 instances can be resized (scaled up or scaled down) based on requirements by stopping the instance and changing its instance type.

Q215) What is EBS?

Answer: EBS (Elastic Block Store) is a block-level storage volume which we can use after attaching and mounting it to an EC2 instance. For the volume types, please refer to the AWS Solutions Architect material.


Q216) What is the difference between EBS, EFS and S3?

Answer:

• We can access EBS only if it is attached and mounted to an instance; at a time an EBS volume can be attached to only one instance.
• EFS can be shared with multiple instances at a time.
• S3 can be accessed without mounting it to instances.

Q217) What is the maximum number of buckets that can be created in AWS?

Answer: 100 buckets can be created by default in an AWS account. To get more buckets, you have to request a limit increase from Amazon.

Q218) What is the maximum number of EC2 instances that can be created in a VPC?

Answer: By default, a maximum of 20 instances can be created. We can create 20 reserved instances and request spot instances as per demand.

Q219) How can EBS be accessed?

Answer: EBS provides high-performance block-level storage which can be attached to a running EC2 instance. The storage can be formatted and mounted on the EC2 instance, and then it can be accessed.
Q220) What is the process to mount an EBS volume to an EC2 instance?

Answer:

• fdisk -l (list block devices and identify the new volume, e.g. /dev/xvdf)
• mkfs.ext4 /dev/xvdf (create a file system on the volume)
• mkdir /my5gbdata (create a mount point)
• mount /dev/xvdf /my5gbdata (mount the volume)
• df -k (verify the mounted file system)

Q221) How do you attach a volume permanently to an instance?

Answer: With each restart the volume will get unmounted from the instance. To keep it mounted, add an entry for it in /etc/fstab:

/dev/xvdf /data ext4 defaults 0 0

(edit the device name and mount point accordingly)

Q222) What is the difference between a Service Role and a SAML Federated Role?

Answer: Service roles are meant for use by AWS services; based on the policies attached to them, they have the scope to do their task. Example: in case of automation we can create a service role and attach it to the service.
Federated roles are meant for user access, giving users access to AWS as per the designed role. Example: we can have a federated role created for our office employees; correspondingly a group will be created in the AD and users will be added to it.

Q223) How many policies can be attached to a role?

Answer: 10 by default (a soft limit); it can be raised up to 20.

Q224) What are the different ways to access AWS?

Answer: Three different ways – the Management Console, the CLI, and the SDKs.

Q225) How is the root AWS user different from an IAM user?

Answer: The root user has access to the entire AWS environment and does not have any policy attached to it, while an IAM user can only perform tasks according to the policies attached to it.
Q226) What do you mean by the principle of least privilege in terms of IAM?

Answer: The principle of least privilege means granting a user or role only the minimum permissions required to perform its task, and nothing more.

Q227) What is the meaning of a non-explicit (implicit) deny for an IAM user?

Answer: When an IAM user is created without any policy attached to it, the user will not be able to access any AWS service until a policy that allows access has been attached.

Q228) What is the precedence between an explicit allow and an explicit deny?

Answer: An explicit deny will always override an explicit allow.

Q229) What is the benefit of creating a group in IAM?


Answer: Creating a group makes user management much simpler: users that need the same kind of permissions can be added to a group, and attaching a single policy to the group is much simpler than attaching it to every user manually.

Q230) What is the difference between the Administrator Access and Power User Access pre-built policies?

Answer: Administrator Access gives full access to AWS resources, while Power User Access gives admin access to everything except the user/group (IAM) management permissions.

Q231) What is the purpose of an Identity Provider?

Answer: An Identity Provider helps in building the trust between AWS and the corporate AD environment when we create a federated role.

Q232) What are the benefits of STS (Security Token Service)?

Answer: It helps in securing the AWS environment, as we do not need to embed or distribute long-term AWS security credentials in the application. As the credentials issued by STS are temporary, we do not need to rotate or revoke them.
Q233) What is the benefit of creating an AWS Organization?

Answer: It helps in managing IAM policies centrally, creating AWS accounts programmatically, and managing payment methods and consolidated billing.

Q234) What is the maximum length of a file name (object key) in S3?

Answer: 1024 bytes of UTF-8 encoded characters.

Q235) Which activity cannot be done using Auto Scaling?

Answer: Maintain a fixed running count of EC2 instances.

Q236) How will you secure data at rest in EBS?

Answer: EBS data is always secure

Q237) What is the maximum size of a single object in S3?

Answer: 5 TB (an S3 bucket itself has no size limit).

Q238) Can objects in Amazon s3 be delivered through amazon cloud front?

Answer:Yes

Q239) Which service is used to distribute content to end users using a global network of edge locations?

Answer: Amazon CloudFront.

Q240) What is ephemeral storage?

Answer: Temporary storage.


Q241) What are shards in the AWS Kinesis service?

Answer: A shard is the base throughput unit of a Kinesis data stream; the data records of a stream are stored in shards.

Q242) Where can you find the ephemeral storage?

Answer: In Instance store service.

Q243) I have some private servers on my premises, and I have also distributed some of my workload on the public cloud. What is this architecture called?

Answer: Hybrid cloud.

Q244) Route 53 can be used to route users to infrastructure outside of AWS. True/False?

Answer: True – Route 53 can route traffic to infrastructure outside of AWS as well.

Q245) Is simple workflow service one of the valid Simple Notification Service subscribers?

Answer: No

Q246) Which cloud model do developers and organizations all around the world leverage extensively?

Answer: IaaS – Infrastructure as a Service.

Q247) Can CloudFront serve content from a non-AWS origin server?

Answer: Yes – CloudFront supports custom (non-AWS) origin servers.

Q248) Is EFS a centralised storage service in AWS?

Answer: Yes

Q249) Which AWS service will you use to collect and process ecommerce data for near real time
analysis?

Answer: Both Dynamo DB & Redshift

Q250) A high IOPS performance of around 15,000 is expected. Which EBS volume type would you recommend?

Answer: Provisioned IOPS.


DEVOPS Q&A

MANAS KUMAR JHA



DevOps
1. What is Source Code Management?
It is a process through which we can store and manage any code. Developers write code, Testers
write test cases and DevOps engineers write scripts. This code, we can store and manage in Source
Code Management. Different teams can store code simultaneously. It saves all changes separately.
We can retrieve this code at any point of time.

2. What are the Advantages of Source Code Management?


Helps in Achieving teamwork. Can work on different features simultaneously. Acts like pipeline b/w
offshore & onshore teams. Track changes (Minute level). Different people from the same team, as
well as different teams, can store code simultaneously (Save all changes separately)

2.1. Which Source Code Management tools are available in the market?


There are many Source Code Management tools available in the market, such as Git, SVN, Perforce, and ClearCase.
Out of all these tools, Git is the most advanced tool in the market, where we get many advantages compared to other Source Code Management tools.

3. What is Git?
Git is one of the Source Code Management tools where we can store any type of code. Git is the most advanced tool in the market now. We also call Git a version control system because every update is stored as a new version. At any point of time we can get any previous version and go back to it. Every version has a unique identifier, which we call the commit ID. By using this commit ID, we can track each change, i.e. who did what and at what time. For every version Git takes an incremental backup instead of a whole backup. That's why Git occupies less space, and since it occupies less space, it is very fast.

4. What are the advantages of Git?


. Speed: - Git stores every update in the form of versions. For every version, it takes incremental
backup instead of taking the whole backup. Since it is taking less space, Git is very fast. That
incremental backup we call “Snapshot”
. Parallel branching: - We can create any number of branches as per our requirement. No need to
take prior permission from any one, unlike other Source Code Management tools. Branching is for
parallel development. Git branches allow us to work simultaneously on multiple features.
. Fully Distributed: - A backup copy is available in multiple locations in each and everyone’s server
instead of keeping in one central location, unlike other Source Code Management tools. So even if
we lose data from one server, we can recover it easily. That’s why we call GIT as DVCS (Distributed
Version Control System)

5. What are the stages in Git?


There are total of 4 stages in Git
1. Workspace: - It is the place where we can create files physically and modify. Being a Git user, we
work in this work space.
2. Staging area/Indexing area: - In this area, Git takes a snapshot for every version. It is a buffer zone
between workspace and local repository. We can’t see this region because it is virtual.


3. Local repository: - It is the place where Git stores all commit locally. It is a hidden directory so that
no one can delete it accidentally. Every commit will have unique commit ID.
4. Central repository: - It is the place where Git stores all commit centrally. It belongs to everyone
who is working in your project. Git Hub is one of the central repositories. Used for storing the code
and sharing the code to others in the team.

6. What is the common branching strategy in Git?


• Product is the same, so one repo. But different features.
• Each feature has one separate branch
• Finally, merge (code) all branches
• For Parallel development
• Can create any no of branches
• Can create one branch on the basis of another branch
• Changes are personal to that particular branch
• Can put files only in branches (not in repo directly)
• The default branch is “Master”
• Files created in a workspace will be visible in any of the branch workspaces until you commit. Once
you commit, then that file belongs to that particular branch.

7. How many types of repositories available in Git?


There are two types of repositories available in Git:
Bare repositories (central): these repositories are only for storing and sharing the code. All central repositories are bare repositories.
Non-bare repositories (local): in these repositories we can modify the files. All local/user repositories are non-bare repositories.

8. Can you elaborate commit in Git?


• Storing a file permanently in the local repository is what we call a commit.
• For every commit, we get one commit ID.
• It contains 40 alphanumeric characters.
• It uses the concept of a "checksum" (a function that generates a value based on the data present in the file).
• Even if you change one dot, the commit ID will change.
• This helps in tracking the changes.

9. What do you mean by “Snapshot” in Git?


• It is a backup copy for each version git stores in a repository.
• Snapshot is an incremental backup copy (only backup for new changes)
• Snapshot represents some data of particular time so that, we can get data of particular time by
taking that particular snapshot
• This snapshot will be taken in Staging area in Git which is present between Git workspace and Git
local repository.

10. What is GitHub?


Git hub is central git repository where we can store code centrally. Git hub belongs to Microsoft
Company. We can create any number of repositories in Git hub. All public repositories are free and
can be accessible by everyone. Private repositories are not free and can restrict public access for


security. We can copy the repository from one account to other accounts also. This process we call
as “Fork”. In this repository also we can create branches. The default branch is “Master”

11. What is Git merge?


By default, we get one branch in git local repository called “Master”. We can create any no of
branches for parallel development. We write code for each feature in each branch so that
development happens separately. Finally, we merge code off all branches in to Master and push to
central repository. We can merge code to any other branch as well. But merging code into master is
standard practice that being followed widely. Sometimes, while merging, conflict occurs. When same
file is in different branches with different code, when try to merge those branches, conflict occurs.
We need to resolve that conflict manually by rearranging the code.

12. What is Git stash?


We create multiple branches to work simultaneously on multiple features. But to work on multiple
tasks simultaneously in one branch (i.e. on one feature), we use git stash. Stash is a temporary
repository where we can store our content and bring it back whenever we want to continue with our
work with that stored content. It removes content inside file from working directory and puts in
stashing store and gives clean working directory so that we can start new work freshly. Later on you
can bring back those stashed items to the working directory and resume your work on that file. Git stash applies to modified (tracked) files, not new files. Once we finish our work, we can remove all stashed items from the stash repository. A short example follows.
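A minimal sketch of the typical stash workflow:

git stash          # save the modified (tracked) files away and get a clean working directory
git stash list     # see the stashed items
git stash pop      # bring the most recent stash back and remove it from the stash list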

13. What is Git Reset?


The git reset command is used to remove changes from the staging area, i.e. to bring a file back from the staging area to the working directory. We use this command before a commit. Often we run git add accidentally; if we then commit, that file will be committed, a commit ID will be generated, and it will be visible to everyone. To avoid this, we use git reset. If you add the "--hard" flag to git reset, the change is removed from the staging area as well as from the working directory in one go. We generally use this if we feel that something is wrong in the file itself.

15. What is Git Revert?


Git Revert command is used to remove changes from all 3 stages (work directory, staging area and
local repository). We use this command after commit. Sometimes, we commit accidentally and later
on we realize that we shouldn’t have done that. For this we use Git revert. This operation will
generate new commit ID with some meaningful message to ignore previous commit where mistake
is there. But, here we can’t completely eliminate the commit where mistake is there. Because Git
tracks each and every change.
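A short, hedged sketch showing both commands (the file name and the commit reference are illustrative):

git reset myfile.txt       # before commit: unstage a file that was added accidentally (the change stays in the working directory)
git reset --hard           # before commit: drop the change from both the staging area and the working directory
git revert <commit-id>     # after commit: create a new commit that undoes the given commit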

16. Difference between Git pull and Git clone?


We use these two commands to get changes from central repository. For the first time if you want
whole central repository in your local server, we use git clone. It brings entire repository to your local
server. Next time onwards you might want only changes instead of whole repository. In this case, we
use Git pull. Git clone is to get whole copy of central repository Git pull is to get only new changes
from central repository (Incremental data)


17. What is the difference between Git pull and Fetch?


We use Git pull command to get changes from central repository. In this operation, internally two
commands will get executed. One is Git fetch and another one is Git merge. Git fetch means, only
bringing changes from central repo to local repo. But these changes will not be integrated to local
repo which is there in your server. Git merge means, merging changes to your local repository which
is there in your server. Then only you can see these changes. So Git pull is the combination of Git fetch
and Git merge.

18. What is the difference between Git merge and rebase?


We often use these commands to merge code in multiple branches. Both are almost same but few
differences. When you run Git merge, one new merge commit will be generated which is having the
history of both development branches. It preserves the history of both branches. By seeing this merge
commit, everyone will come to know that we merged two branches. If you do Git rebase, commits in
new branch will be applied on top of base branch tip. There won’t be any merge commit here. It
appears as if you had started working in one single branch from the beginning. This operation does not
preserve the history of the new branch.

19. What is Git Bisect?


We use git bisect to pick out a bad commit from among the good commits. Developers sometimes make mistakes, and it is very difficult for them to find the exact commit where the mistake was introduced; building every commit one by one to find the bad one is tedious. Git bisect makes this easy: it divides the range of commits into two halves (bisecting it), so instead of building each commit, we test each half. Whichever half contains the bad commit will fail the build, and we repeat the operation on that half until we narrow it down to the bad commit. So git bisect allows you to find a bad commit among good commits; you don't have to trace down the bad commit by hand, git bisect does that for you. A typical session looks like the sketch below.
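A minimal sketch of a bisect session (the good revision "v1.0" is illustrative):

git bisect start
git bisect bad              # the current commit is known to be broken
git bisect good v1.0        # the last known good commit or tag
# git checks out a commit halfway in between; test it and mark it good or bad:
git bisect good             # or: git bisect bad
# repeat until git reports the first bad commit, then clean up:
git bisect reset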

20. What is Git squash?


To move multiple commits into its parent so that you end up with one commit. If you repeat this
process multiple times, you can reduce “n” number of commits to a single one. Finally we will end up
with only one parent commit. We use this operation just to reduce number of commits.

21. What is Git hooks?


We often call this as web hooks as well. By default we get some configuration files when you install
git. These files we use to set some permissions and notification purpose. We have different types of
hooks (pre commit hooks & post commit hooks)
Pre-commit hooks:- Sometimes you would want every member in your team to follow certain
pattern while giving commit message. Then only it should allow them to commit. These type of
restrictions we call pre-commit hooks.
Post-commit hooks:- Sometimes, being a manager you would want an email notification regarding
every commit occurs in a central repository. This kind of things we call post-commit hooks.
In simple terms, hooks are nothing but scripts to put some restrictions.

22. What is Git cherry-pick?


When you go with git merge, all commits which are there in new development branch will be merged
into the current branch where you are. But sometimes the requirement is such that you want to get only one commit from the development branch instead of merging all commits. In this case we go with git cherry-pick. Git cherry-pick picks only the one commit you select and merges it with the


commits which are there in your current branch. So picking particular commit and merging into your
current branch we call git cherry-pick.

23. What is the difference between Git and SVN?


SVN:- It is centralized version control system (CVCS) where back up copy will be placed in only one
central repository. There is no branching strategy in SVN. You can’t create branches. So no parallel
development. There is no local repository. So can’t save anything locally. Every time after writing
code you need to push that code to central repository immediately to save changes.
Git:- It is a Distributed version control system where back up copy is available in everyone’s machine’s
local repository as well as a central repository. We can create any no of branches as we want. So we
can go in parallel development simultaneously. Every Git repository will have its own local repository.
So we can save changes locally. At the end of our work finally, we can push code to a central
repository.

24. What is the commit message in Git?


Every time we commit, we have to give a commit message just to identify each commit. We can't remember commit IDs because they contain 40 alphanumeric characters. So, to recall commits easily, we give a commit message. The format of the commit
message differs from company to company and individual to individual. We have one more way to
identify commits. That is giving “Tags”. Tag is a kind of meaningful name to a particular commit.
Instead of referring to commit ID, we can refer to tags. Internally tag will refer to respective commit
ID. These are the ways to get a particular commit easily.

26. What is Configuration Management?


It is a method through which we automate admin tasks. Each and every minute detail of a system we call
configuration details. If we do any change here means we are changing the configuration of a
machine. That means we are managing the configuration of the machine. System administrators used
to manage the configuration of machine through manually. DevOps engineers are managing this
configuration through automated way by using some tools which are available in the market. That’s
why we call these tools as configuration management tools.

27. What is IAC?


IAC means Infrastructure As Code. It is the process through which we automate all admin tasks. Here
we write code in Ruby script in chef. When you apply this code, automatically code will be converted
into Infrastructure. So here we are getting so many advantages in writing the code. Those are 1. Code
is Testable (Testing code is easy compare to Infrastructure) 2. Code is Repeatable (Can re-use the
same code again and again)
3. Code is Versionable (Can store in versions so that can get any previous versions at any time)

28. What do you mean by IT Infrastructure??


IT Infrastructure is a composite of the following things
• Software
• Network
• People
• Process


29. What are the problems that system admins used to face earlier when there were no
configuration management tools?
1. Managing users & Groups is big hectic thing (create users and groups, delete, edit……) 2. Dealing
with packages (Installing, Upgrading & Uninstalling) 3. Taking backups on regular basis manually 4.
Deploying all kinds of applications in servers 5. Configure services (Starting, stopping and restarting
services) These are some problems that system administrators used to face earlier in their manual
process of managing configuration of any machine.

30. Why should we go with Configuration Management Tool?


1. By using the Configuration Management Tool, we can automate almost each and every admin task.
2. We can increase uptime so that can provide maximum user satisfaction. 3. Improve the
performance of systems. 4. Ensure compliance 5. Prevent errors as tools won’t do any errors 6.
Reduce cost (Buy tool once and use 24/7)

31. How this Configuration Management Tool works?


Whatever system admins (Linux/windows) used to do manually, now we are automating all those
tasks by using any Configuration Management Tool. We can use this tool whether your servers are in
on-premises or in the cloud. It turns your code into infrastructure. So your code is versionable,
repeatable and testable. You only need to tell what the desired configuration should be, not how to
achieve it. Through automation, we get our desired state of server. This is unique feature of
Configuration Management Tool.

32. What is the architecture of Chef?


Chef is an administration tool. In this we have total 3 stages. 1. Chef Workstation (It is the place where
we write code) 2. Chef Server (It is the place where we store code) 3. Chef Node (It is the place where
we apply code) We need to establish communication among workstation, server and nodes. You can
have any no of nodes. There is no limit. Chef can manage any no of nodes effectively.

33. Components of Chef?


Chef Workstation: Where you write the code
Chef Server: Where you upload the code
Chef Node: Where you apply the code
Knife: Tool to establish communication among workstation, server & node.
Chef-client: Tool runs on every chef node to pull code from chef server
Ohai: Maintains current state information of chef node (System Discovery Tool)
Idempotency: Tracking the state of system resources to ensure that the changes should not re-apply
repeatedly.
Chef Supermarket: Where you get custom code

34. How does Chef Works?


We need to install chef package in workstation, server and nodes. We create cookbook in
workstation. Inside cookbook, there will be a default recipe where you write code in ruby script. You
can create any no of recipes. There is no limit. After writing code in recipe, we upload whole cookbook
to chef server. Chef server acts as central hub storing code. Then, we need to add this cookbook’s
recipe to nodes run-list. Chef-client tool will be there in each and every chef node. It runs frequently.
Chef-client comes to chef server and take that code and applies that code in node. This is how code
will be converted into infrastructure.


35. What is Idempotency?


It is unique feature in all configuration management tools. It ensures that changes should not re-
apply repeatedly. Once chef-client converted code into Infrastructure, then even chef-client runs
again, it will not take any action. It won’t do the same task again and again. If any new changes are
there in that code, then only chef-client is going to take action. So it doesn’t make any difference ever
if you run chef-client any no of times. So tracking the system details to not to reapply changes again
and again, we call Idempotency.

36. What is Ohai and how does it work?


Ohai we call “System Discovery Tool”. It stores system information. It captures each and every minute
details of system and updates it then and there if any new changes are there. Whenever chef-client
converts code in infrastructure in node, immediately Ohai store will be updated. Next time onwards,
before chef-client runs, it verifies in Ohai store to know about current state of information. So chef-
client will come to know the current state of server. Then chef-client acts accordingly. If new changes
are there, then only it will take action. If there are no new changes, then it won’t take any action.
Ohai tool helps in achieving this.

37. How many types of chef server?


Total there are 3 ways through which we can manage chef server. 1. Directly we can take chef server
from Chef Company itself. In this case, everything will be managed by Chef Company. You will get
support from chef. This type of server we call Managed/Hosted chef. This is completely Graphical
User Interface (GUI). It’s not free. We need to pay to Chef Company after exceeding free tier limit. 2.
We can launch one server and we need to install chef server package. It is completely free package.
It’s GUI. 3. We can launch one server and we need to install chef server package. It is completely free
package. It’s CLI (Command Line Interface).

38. What is there inside cookbook??


The below-mentioned files and folders will be there inside a cookbook when you first create it:
Chefignore: like .gitignore (to ignore files and folders)
Kitchen.yml: for testing of the cookbook
Metadata.rb: name, author, version, etc. of the cookbook
Readme.md: information about usage of the cookbook
Recipes: the files where you write code
Spec: for unit tests
Test: for integration tests

39. What is Attributes concept in chef?


Sometimes we might need to deploy web applications to nodes, and for that we need to know some host-specific details of each server, like IP address, hostname, etc., because we need to mention them in the configuration files of each server. This information varies from system to system. These host-specific details that we mention in configuration files we call "Attributes". The chef-client tool gathers these attributes from the Ohai store and puts them in the configuration files. Instead of hard coding these attributes, we mention them as variables, so that every time the file is updated with the latest details of the respective node.

40. What is Run-list in Chef?


This is an ordered list of recipes that we are going to apply to nodes. We mention all recipes in
cookbook and then we upload that cookbook to chef server. Then, we attach all recipes to nodes run-
list in sequence order. When chef-client runs, it applies all recipes to nodes in the same order


whatever the order you mention in run-list. Because sometimes order is important especially when
we deal with dependent recipes.

41. What is bootstrap?


It is the process of adding chef node to chef server or we can call, bringing any machine into chef
environment. In this bootstrapping process total three action will be performed automatically. 1.
Node gets connected to chef server. 2. Chef server will install chef package in chef node. 3. Cookbooks
will be applied to chef node.
It is only one time effort. As and when we purchase any new machine in company, immediately we
add that server to chef server. At a time, we can bootstrap one machine. We can’t bootstrap multiple
machines at a time.
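A hedged sketch of a bootstrap command (the IP address, SSH user, key file, node name, and run-list are illustrative, and the exact flag names vary slightly between Chef versions):

knife bootstrap 192.0.2.10 --ssh-user ec2-user --sudo --identity-file ~/.ssh/mykey.pem --node-name node1 --run-list 'recipe[apache]'
# connects over SSH, installs the chef-client package on the node, registers it with the chef server, and applies the run-list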

42. What is the workflow of Chef?


We connect chef workstation, chef server and chef node with each other. After that, we create
cookbook in chef workstation and inside that cookbook, we write code in recipe w.r.t. the
infrastructure to be created. Then we upload entire cookbook to chef server and attach that
cookbook’s recipe to nodes run-list. Now we automate chef-client which will be there in all chef
nodes. Chef-client runs frequently towards chef server for new code. So chef-client will get that code
from server and finally applies to chef node. This is how, code is converted into infrastructure. If no
changes are there in code, even if chef-client runs any no of time, it won’t take any action until it
finds some changes in code. This is what we call Idempotency.

43. How does we connect Chef Workstation to Chef Server?


First we download the starter kit from the chef server. This will be downloaded in the form of a zip file. If we
extract this zip file, we will get chef-repo folder. This chef-repo folder we need to place in chef
workstation. Inside chef-repo folder, we can see total three folders. They are .chef, cookbooks and
roles. Out of these three, .chef folder is responsible to establish communication between chef server
and chef workstation. Because, inside .chef folder, we can see two files. They are knife.rb and
organization.pem. Inside knife.rb, there will be the URL (address) of the chef server. Because of this URL,
communication will be established between chef server and chef workstation. This is how we connect
Chef Workstation to Chef Server.

44. How does the chef-client runs automatically?


By default, chef-client runs manually, so we need to automate it. For this, we use the "cron"
tool, which is the default tool on all Linux machines used to schedule tasks to be executed automatically
at frequent intervals. So in this “crontab” file, we give chef-client command and we need to set the
timing as per our requirement. Then onwards chef-client runs automatically after every frequent
intervals. It is only one time effort. When we purchase any new server in company, along with
bootstrap, we automate chef-client then and there.
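A minimal sketch of such a crontab entry (the 30-minute interval and the binary path are illustrative choices):

*/30 * * * * /usr/bin/chef-client    # run chef-client every 30 minutes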

45. What is chef supermarket?


Chef supermarket is the place where we get custom cookbooks. Every time we need not to create
cookbooks and need not to write code from scratch. We can go with custom cookbooks which are
available in chef supermarket being provided by chef organization and community. We can download
these cookbooks and modify as per our needs. We get almost each and every cookbook from chef
supermarket. They are safe to use.


46. What is wrapper cookbook?


Either we can download those chef supermarket cookbooks or without downloading, we can call
these supermarket cookbooks during run time so that every time we get updates automatically for
that cookbook if any new updates are there. Here, we use our own cookbook to call chef supermarket
cookbook. This process of calling cookbook by using another cookbook, we call wrapper cookbook.
Especially, we use this concept to automate chef-client.

47. What is “roles” in chef?


Roles are nothing but a Custom run-list. We create role & upload to chef server & assign them to
nodes. If we have so many nodes, need to add cookbook to run-list of all those nodes, it is very
difficult to attach to all nodes run-list. So, we create role & attach that role to all those nodes once.
Next time onwards, add cookbook to that role. Automatically, that cookbook will be attached to all
those nodes. So role is one time effort. Instead of adding cookbooks to each & every node’s run-list
always, just create a role & attach that role to nodes. When we add cookbook to that role, it will be
automatically applied to all nodes those assigned with that role.

48. What is include_recipe in chef?


By default, we can call one recipe at a time in one cookbook. But if you want to call multiple recipes
from same cookbook, we use include_recipe concept. Here, we take default recipe and we mention
all recipes to be called in this default recipe in an order. If we call default recipe, automatically default
recipe will call all other recipes which are there inside default recipe. By using one recipe, we can call
any no of recipes. This process of calling one recipe by using other recipe, we call as include_recipe.
Here condition is we can call recipes from same cookbook, but not from different cookbooks.
49. How to deploy a web server by using chef?

package 'httpd' do
  action :install
end

file '/var/www/html/index.html' do
  content 'Hello Dear Students!!'
  action :create
end

service 'httpd' do
  action [ :enable, :start ]
end

50. How to write ruby code to create file, directory?

file '/myfile' do
  content 'This is my second file'
  action :create
  owner 'root'
  group 'root'
end

directory '/mydir' do
  action :create
  owner 'root'
  group 'root'
end

51. How to write ruby code to create user, group and install package?

user 'user1' do
  action :create
end

group 'group1' do
  action :create
  members 'user1'
  append true
end

package 'httpd' do
  action :install
end


52. What is container?


A container is like a virtual machine in which we can deploy any type of application, software, or library. It is a lightweight environment which uses an OS in the form of an image that is much smaller in size compared to traditional VMware and Oracle VirtualBox OS images. The word "container" has been taken from shipping containers. It has everything needed to run an application.
53. What is virtualization?
Logically dividing big machine into multiple virtual machines so that each virtual machine acts as new
server and we can deploy any kind of applications in it. For this first we install any virtualization
software on top of the base OS. This virtualization software divides the base machine resources into
logical components. In simple terms, logically dividing one machine into multiple machines is what we call
virtualization.
54. What is Docker?
Docker is a tool by using which, we create containers in less time. Docker uses light weight OS in the
form of docker images that we will get from docker hub. Docker is open source now. It became so
popular because of its unique virtualization concept called “Containerization” which is not there in
other tools. We can use docker in both windows and Linux machines.


55. What do you mean by docker image?


A docker image is a lightweight OS image provided by the Docker company and community. We can get any type of docker image from Docker Hub. We use these docker images to create docker containers. A docker image may contain only an OS, or an OS plus other software as well. Each piece of software in a docker image is stored in the form of a layer. The advantage of using docker images is that we can replicate the same environment any number of times.
56. What are the ways through which we can create docker images?
There are three ways through which we can create docker images. 1. We can take any type of docker image directly from Docker Hub, provided by the Docker company and community. 2. We can create our own docker images from our own docker containers, i.e. first we create a container from a base docker image taken from Docker Hub, then we go inside the container, install all required software, and create a docker image from that container. 3. We can create a docker image from a Dockerfile. This is the most preferred way of creating docker images.
57. What is docker file and why do we use it?
It is just a normal text file with instructions in it to build a docker image. It is the automated way of creating docker images. In this file, we mention the required OS image and all required software in the form of instructions. When we build the Dockerfile, in the back end an intermediate container is created, the docker image is created from that container, and the container is destroyed automatically.


58. Difference between docker and VM Ware?


VMware uses a complete OS which is GBs in size, but a docker image is only MBs in size. So it takes less space, and that's why it uses fewer base machine resources; the docker image is a compressed version of the OS. The second advantage of docker is that there is no pre-allocation of RAM. During run time, it takes RAM from the base machine as per requirement and, once the job is done, it releases the RAM. But in VMware, RAM is pre-allocated and stays blocked whether it is used or not. So you need more RAM on the base machine if you want to use VMware, unlike Docker.
59. What is OS-level virtualization?
It is the unique feature of Docker which is not available in other virtualization software. Docker takes most of the UNIX features from the host machine OS and only adds the extra layers of the required OS in the form of a docker image. So a docker image contains only the extra layers of the required OS. For the core UNIX kernel, it depends on the host OS, because the UNIX kernel is the same in all UNIX and Linux flavours. In simple terms, Docker uses the host OS virtually. That's why we call this concept OS-level virtualization.
60. What is Layered file system/Union file system?
Inside a docker container, whatever we do forms a new layer, for instance creating files and directories, installing packages, etc. This is what we call a layered file system. Each layer takes less space. We can create a docker image from this container. In that docker image we also get all these layers, and together they form a union; that's why we also call it a Union File System. If we create a container out of that docker image, you are able to see all those files, directories, and packages. This is what replication of the same environment means.

61. What are the benefits of Docker?


• Containerization (OS level virtualization) (No need guest OS)
• No pre-allocation of RAM
• Can replicate same environment
• Less cost
• Less weight (MB’s in size)
• Fast to fire up
• Can run on physical/virtual/cloud
• Can re-use (same image)
• Can create containers in less time

62. List of Docker components?


Docker image: – Contains an OS (very small, almost negligible) + software
Docker Container: – Container like a machine which is created from Docker image.
Docker file: – Describes steps to create a docker image.
Docker hub/registry: – Stores all docker images publicly.
Docker daemon: – Docker service that runs in the back end.
The above five components are what we call the Docker components.

63. What is Docker workflow?


First we create a Dockerfile by mentioning the instructions to build a docker image. From this docker image, we create a docker container. The docker image can also be pushed to Docker Hub, from where it can be pulled by others to create their own docker containers. We can also create docker images from docker containers. So docker images can be created from either a Dockerfile or a docker container, and docker containers are created from docker images. This is the workflow of Docker.


64. Sample Docker file instructions?


FROM ubuntu
WORKDIR /tmp
RUN echo "Hello" > /tmp/testfile
ENV myname user1
COPY testfile1 /tmp
ADD test.tar.gz /tmp

65. What is the importance of volumes in Docker?


• Volume is a directory inside your container
• First declare directory as a volume and then share volume
• Even if we stop container, still we can access volume
• Volume will be created in one container
• You can share one volume across any no of containers
• Volume will not be included when you update an image
• Map volumes in two ways
• Share host – container
• Share container – container

66. What do you mean by port mapping in Docker?


Suppose you want to make a container act as a web server by installing a web package in it; you need to give the public a way to reach the website which is running inside the docker container. But docker containers don't have a publicly reachable IP address. To address this issue, we have a concept called Docker port mapping. We map a host port to the container port, and customers use the public IP of the host machine. Their request is then routed from the host port to the container's port, and the web page running inside the docker container is loaded. This is how we can access a website running inside a container through port mapping. A short example follows.
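As a minimal sketch (the nginx image and the port numbers are illustrative choices, not from the original answer):

docker run -d --name webserver -p 8080:80 nginx    # map host port 8080 to container port 80
curl http://<host-public-ip>:8080                  # the request is forwarded to the web server inside the container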

67. What is Registry server in Docker?


Registry server is our own docker hub created to store private docker images instead of storing in
public Docker hub. Registry server is one of the docker containers. We create this Registry server
from “registry” image, especially provided by docker to create private docker hub. We can store any
no of private docker images in this Registry server. We can give access to others, so that, they also
can store their docker images whomever you provide access. Whenever we want, we can pull these
images and can create containers out of these images.
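A hedged sketch of running a private registry and pushing an image to it (the image names and the port are illustrative):

docker run -d -p 5000:5000 --name registry registry:2    # start a private registry container from the official "registry" image
docker tag nginx localhost:5000/my-nginx                 # tag a local image with the registry address
docker push localhost:5000/my-nginx                      # push it to the private registry
docker pull localhost:5000/my-nginx                      # pull it back later to create containers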

68. Important docker commands?


1. docker ps (to see the list of running containers)
2. docker ps -a (to see the list of all containers)
3. docker images (to see the list of all images)
4. docker run (to create a docker container)
5. docker attach (to go inside a container)
6. docker stop (to stop a container)
7. docker start (to start a container)
8. docker commit (to create an image out of a container)
9. docker rm (to delete a container)
10. docker rmi (to delete an image)


69. What is Ansible?


Ansible is one of the configuration management tools. It is a method through which we automate system admin tasks. Configuration refers to each and every minute detail of a system. If we do any change in the system, it means we are changing the configuration of the machine. Windows/Linux system administrators traditionally manage the configuration of a machine manually, while DevOps engineers manage this configuration in an automated way by using tools available in the market. One such tool is Ansible. That's why we call Ansible a configuration management tool.

70. Working process of Ansible?


Here we create a file called a playbook, and inside the playbook we write a script in YAML format to create infrastructure. Once we execute this playbook, the code is automatically converted into infrastructure. We call this process IAC (Infrastructure as Code). We have open-source and enterprise editions of Ansible; the enterprise edition is called Ansible Tower.

71. The architecture of Ansible?


We create the Ansible server by installing the Ansible package on it. Python is a prerequisite for installing Ansible. We do not need to install the Ansible package on the nodes, because communication is established from the server to the nodes through the "ssh" client, and by default all Linux machines have an "ssh" client. The server pushes the code that we write in playbooks to the nodes, so Ansible follows a push mechanism.

72. Ansible components?
Server: – the place where we create playbooks and write code in YAML format.
Node: – the place where we apply code to create infrastructure. The server pushes code to the nodes.
Ssh: – the agent through which the Ansible server pushes code to the nodes.
Setup: – a module in Ansible which gathers node information.
Inventory file: – the file in which we keep the IPs/DNS names of the nodes.

73. What are the disadvantages of other configuration management tools?


• Huge overhead of Infrastructure setup
• Complicated setup
• Pull mechanism
• Lot of learning required

74. What are the advantages of Ansible over other configuration management tools?
• Agentless
• Relies on “ssh”
• Uses python
• Push mechanism

75. How does Ansible work?


We give nodes IP addresses in hosts file by creating any group in ansible server why because, ansible
doesn’t recognize individual IP addresses of nodes. We create playbook and write code in YAML
script. The group name we have to mention in a playbook and then we execute the playbook. By
default, playbook will be executed in all those nodes which are under this group. This is how ansible
converts code into infrastructure.


76. What do you mean by Ad-Hoc commands in Ansible?


These are simple one-liner commands we use to meet temporary requirements without actually saving them for later. Here we generally don't write playbooks, so idempotency is not guaranteed with ad-hoc commands. If we don't find the required YAML module to create the infrastructure, or the need is only temporary, we can use these ad-hoc commands instead of playbooks. A couple of examples follow.
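A couple of hedged examples (the group name "demo" and the package name are illustrative):

ansible all -m ping                                     # check connectivity to every host in the inventory
ansible demo -b -m yum -a "name=httpd state=present"    # install httpd on the "demo" group with privilege escalation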

77. Differences between Chef and Ansible?


• Ansible – Chef
• Playbook – Recipe
• Module – Resource
• Host – Node
• Setup – Ohai
• Ssh – Knife
• Push – Pull

78. What is Playbook in Ansible?


Playbook is a file where we write YAML script to create infrastructure in nodes. Here, we use modules
to create infrastructure. We create so many sections in playbook. We mention all modules in task
section. You can create any no of playbooks. There is no limit. Each playbook defines one scenario.
All sections begin with “-” & its attributes & parameters beneath it.

79. Mention some list of sections that we mention in Playbook?


1. Target section 2. Task section 3. Variable section 4. Handler section

80. What is Target section in Ansible playbook?


This is one of the important sections in Playbook. In this section, we mention the group name which
contains either IP addresses or Hostnames of nodes. When we execute playbook, then code will be
pushed to all nodes which are in the group that we mention in the Target section. We use the "all"
keyword to refer to all groups.

81. What is Task section in Ansible playbook?


This is second most important section in playbook after target section. In this section, we are going
to mention list of all modules. All tasks we mention in this task section. We can mention any no of
modules in one playbook. There is no limit. If there is only one task, then instead of going with big
playbook, simply we can go with arbitrary command where we can use one module at a time. If more
than one module, then there is no option except going with big playbook.

82. What is Variable section?


In this section we declare variables. Instead of hard-coding values, we refer to them as variables so that at runtime the actual value is substituted for the key, just as in any programming or scripting language. The “vars” keyword is used to define variables.

83. What is Handler section?


Ordinary tasks go in the tasks section, but tasks that depend on another task should not simply be listed there; that is not good practice. For example, installing a package is one task and starting its service is another, and there is a dependency between them: the service can be started only after the package has been installed, otherwise an error is thrown. Such dependent tasks are placed in the handler section.


In the example above, the package task goes in the tasks section and the service task goes in the handler section, triggered via “notify”, so that the service is started only after the package has been installed.
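A hedged sketch of this pattern (group name demo assumed; module arguments kept minimal):

---
- hosts: demo
  become: yes
  tasks:
    - name: Install httpd
      yum:
        name: httpd
        state: present
      notify: Start httpd        # the handler runs only if this task reports a change
  handlers:
    - name: Start httpd
      service:
        name: httpd
        state: started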

84. What is Dry run in playbook?


A dry run tests a playbook. Before executing the playbook on the nodes, we can check whether the code behaves as intended. A dry run does not actually execute the playbook; it only reports what would change, as if it had been executed, so by reading the output we can judge whether the playbook is written properly. It is a way of testing how the playbook will behave without running the tasks.
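For example (site.yml is an assumed playbook name):

ansible-playbook site.yml --check

and a pure syntax check can be done separately with:

ansible-playbook site.yml --syntax-check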

85. Why are we using loops concept in Ansible?


Sometimes we need to repeat the same task many times, for instance installing multiple packages or creating many users or groups. Writing a separate module entry for every item is tedious, so Ansible provides loops to address this, typically used in combination with variables (each item is referenced as {{ item }}).
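A minimal sketch, assuming a group named demo and an arbitrary list of packages:

---
- hosts: demo
  become: yes
  tasks:
    - name: Install several packages
      yum:
        name: "{{ item }}"
        state: present
      loop:                 # older playbooks use with_items instead of loop
        - httpd
        - git
        - vim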

86. Where do we use conditionals in Playbooks?


Sometimes the nodes are a mixture of different flavours of Linux, and commands or package managers differ between them. In that case we cannot run one common set of commands on every machine, and we do not want to run separate plays against each node by hand either. Conditionals address this: a task is executed only when the condition we give (usually based on gathered facts) is true.
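An illustrative sketch using the ansible_os_family fact (group name demo assumed):

---
- hosts: demo
  become: yes
  tasks:
    - name: Install Apache on RedHat-family nodes
      yum:
        name: httpd
        state: present
      when: ansible_os_family == "RedHat"
    - name: Install Apache on Debian-family nodes
      apt:
        name: apache2
        state: present
      when: ansible_os_family == "Debian"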

87. What is Ansible vault?


Playbooks sometimes contain sensitive information such as passwords and keys, and anyone who can open the playbook can read it. We therefore need to protect our playbooks from being read by others. Ansible Vault does this by encrypting playbooks (or variable files), so that only someone who has the vault password can read or run them. It is simply a way of protecting playbooks through encryption.
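The usual workflow looks like this (secrets.yml and site.yml are assumed file names):

ansible-vault encrypt secrets.yml
ansible-vault edit secrets.yml
ansible-playbook site.yml --ask-vault-pass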

88. What do you mean by Roles in Ansible?


Adding more and more functionality to a single playbook file makes it difficult to maintain. To address this, we organize playbooks into a directory structure called “roles”: each section (tasks, handlers, variables, and so on) gets its own file, and the main playbook simply lists the role names instead of containing every module itself. When the main playbook runs, it calls those files in order, so we can keep the playbook small and free of complexity.
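As a sketch (webserver is an assumed role name), a role follows the standard directory layout:

roles/
  webserver/
    tasks/main.yml        # the tasks for this role
    handlers/main.yml
    vars/main.yml
    templates/
    files/

and the main playbook just references the role:

---
- hosts: demo
  become: yes
  roles:
    - webserver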

89. Write a sample playbook to install any package?


---
# My first YAML playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh
  tasks:
    - name: Install HTTPD on CentOS 7
      action: yum name=httpd state=installed

90. Write a sample playbook by mentioning variables instead of hard coding?


---
# My first YAML playbook with a variable
- hosts: demo
  user: ansible
  become: yes
  connection: ssh
  vars:
    pkgname: httpd
  tasks:
    - name: Install HTTPD server on CentOS 7
      action: yum name='{{ pkgname }}' state=installed


91. What is CI & CD?


CI means Continuous Integration and CD means Continuous Delivery/Deployment. Whenever developers write code, we integrate the code of all developers at that point in time, build it, test it and deliver/deploy it to the client; this whole process is CI & CD, and Jenkins helps to achieve it. So instead of doing nightly builds, we build as and when a commit occurs: integrating all code in the SCM tool, building it, testing it and checking its quality is what we call Continuous Integration.

92. Key terminology that we use in Jenkins?


Integrate: combine all the code written by developers up to some point in time.
Build: compile the code and produce a small executable package.
Test: test in all environments to check whether the application works properly.
Archive: store the artifact in an artifact repository (e.g. Artifactory) so that it can be reused or delivered again in future.
Deliver: hand the product over to the client.
Deploy: install the product on the client's machines.

93. What is Jenkins Workflow?


We attach the Git, Maven, Selenium and Artifactory plug-ins to Jenkins. Once developers push code to Git, Jenkins pulls that code and sends it to Maven for the build. Once the build is done, Jenkins takes the built artifact and sends it to Selenium for testing. Once testing is done, Jenkins pushes the artifact to Artifactory as required, and finally we can deliver the end product to the client, which we call continuous delivery. Jenkins can also deploy directly onto the client's machines if required. That is the Jenkins workflow.

94. In what ways can we do Continuous Integration?
There are three ways in which we can do Continuous Integration:
1. Manually: write the code, do the build manually, test manually by writing test cases, and deploy manually onto the client's machines.
2. Scripts: do the above by writing scripts so that CI/CD runs automatically; the complexity here is that writing such scripts is not easy.
3. Tools: using a tool like Jenkins is very handy. Almost everything is pre-configured in these tools, so there is little manual intervention; this is the preferred way.

95. Benefits of CI?


1. Bugs are detected as soon as possible, so they are fixed quickly and development moves faster.
2. Complete automation, with no manual intervention needed.
3. We can still intervene manually whenever we want, i.e. we can stop any stage at any point in time, which gives better control.
4. We can establish a complete and continuous workflow.

96. Why only Jenkins?


• It has a huge number of plug-ins.
• You can write your own plug-ins.
• You can use community plug-ins.
• Jenkins is not just a tool, it is a framework: you can do almost anything you want, and all you need are the right plug-ins.
• We can attach slaves (nodes) to the Jenkins master; the master instructs the slaves to do the job, and if no slave is available, Jenkins does the job itself.


• Jenkins also acts as a cron server replacement, i.e. it can perform repeated tasks automatically, such as running scripts on a schedule (for example an automatic daily job).
• We can create labels (groups of slaves) and restrict where a project has to run.

97. What is Jenkins Architecture?


Jenkins follows a master–slave (client–server) model. The machine where we install Jenkins is the Jenkins master. We can also create slaves in Jenkins so that the server load is distributed to them. By default the master assigns jobs to whichever slave is available, but if you want a job to run only on a particular slave, you can restrict it so that it executes there and nowhere else. Slaves can be grouped using a “Label”.

98. How to install Jenkins?


• Jenkins can be installed on any OS; all operating systems support it. Since we access Jenkins only through a web page, it makes little difference whether it is installed on Windows or Linux.
• Choose the Long Term Support (LTS) release so that you get support from the Jenkins community. If you are using Jenkins only for testing, the weekly release is fine, but for production environments the LTS release is preferred.
• Java must be installed first; it is a prerequisite for Jenkins.
• A web/servlet container is needed because Jenkins is accessed through a web page (the standard Jenkins package already ships with an embedded one).

99. Does Jenkins open source?


Yes. Jenkins is open source and supported by its community. Jenkins originated as a fork of the older Hudson project, which is no longer actively developed. If commercial (enterprise-grade) support is needed, vendors such as CloudBees provide supported distributions built on Jenkins.

100. How many types of configurations in Jenkins?


There are three levels of configuration in Jenkins:
1. Global: changes made here apply to the whole Jenkins instance, including jobs and nodes. This configuration has the highest priority.
2. Job: these configurations apply only to individual jobs (also called projects or items in Jenkins).
3. Node: these configurations apply only to nodes (also called slaves), which act as helpers to the Jenkins master and take on the excess load.

101. What do you mean by workspace in Jenkins?


The workspace is the location on disk where Jenkins places all files related to a project. By default each project or job is assigned its own workspace location, which contains Jenkins-specific project metadata, temporary files such as logs, and any build artefacts, including transient build files. The Jenkins web page acts as a window through which we actually do the work in the workspace.

102. List of Jenkins services?


• localhost:8080/restart (to restart Jenkins)
• localhost:8080/stop (to stop Jenkins)
• localhost:8080/start (to start Jenkins)


103. How to create a free style project in Jenkins?


• Create project by giving any name
• Select Free style project
• Click on build
• Select execute windows batch command
• Give any command (echo “Hello Dear Students!!”)
• Select Save
• Click on Build now
• Finally can see Console output

104. What do you mean by Plugins in Jenkins?


• With Jenkins, nearly everything is a plugin and nearly all functionality is provided by plugins. You can think of Jenkins as little more than an executor of plugins.
• Plugins are small libraries that add new abilities to Jenkins and can provide integration points to other tools.
• Since nearly everything Jenkins does is driven by a plugin, Jenkins ships with a small set of default plugins, some of which can be upgraded independently of Jenkins itself.

105. How to create Maven Project?


• Select new item
• Copy the git hub maven project link and paste in git section in Jenkins
• Select build
• Click on clean package
• Select save
• Click on Build now
• Verify the workspace contents against the GitHub repository
• See the console output

106. How can we Schedule projects?


Sometimes we need a job to be executed at regular intervals. To schedule a job:
• Click on any project
• Click on Configure
• Select Build Triggers
• Tick Build periodically
• Give the timing as a cron expression, e.g. (* * * * *). The five fields are minute, hour, day of month, month and day of week, so * * * * * means every minute.
• Select Save
• You can now see automatic builds every minute
• You can still trigger a build manually whenever you want

107. What do you mean by Upstream and Downstream projects?


These are also called linked projects, and they are the way we connect one job to another. If job A triggers job B once A's build is over, then A is the upstream project and B is the downstream project: the downstream job waits until the upstream job finishes and is then triggered automatically. The same chain can be configured from either side, either the first job declares “build other projects” or the second job declares “build after other projects are built”, and either approach can be used to link multiple jobs together.


108. What is view in Jenkins?


A view lets us customize the Jenkins home page to our needs, for example by segregating jobs by type (freestyle jobs, Maven jobs and so on). To create a custom view:
• Select List of Related Projects
• Select Default views
• Click on All
• Click on + and select Freestyle
• Select List Views
• Select Job filter
• Select required jobs to be segregated
• Now, you can see different view

109. What is User Administration in Jenkins?


In Jenkins we can create users and groups and assign them limited privileges so that we have better control over Jenkins. Users do not install Jenkins on their own machines; they access it as users. Permissions are not assigned directly to users here; instead we create “roles”, assign permissions to those roles, and attach the roles to users, so the users receive whatever permissions the roles carry (typically via the Role-based Authorization Strategy plug-in).

110. What is Global tool configuration in Jenkins?


We install Java, Maven, Git and other tools on our server. If we do nothing, Jenkins will download and install these tools automatically every time it needs them, which is not good practice. Instead we register the installed paths of these tools in Jenkins, so that whenever Jenkins needs them it picks them up from the local machine rather than downloading them again. This way of registering tool paths in Jenkins is called “Global Tool Configuration”.

111. What is Build?


Build means compiling the source code, assembling all the class files and finally creating a deliverable.
Compile: convert the source code into a machine-readable format.
Assemble (link): group all the class files together.
Deliverable: a .war or .jar file.
The process is broadly the same for any type of code, and this overall process is what we call a build.

112. What is Maven?


Maven is a build tool, one of the most advanced in the market, in which almost everything comes pre-configured. Maven is an Apache project. It is used mainly for building Java/JVM code; it is not intended for building code in other languages. Maven ships with many plug-ins by default, and you can also write your own. Maven's local repository is “.m2” (in the user's home directory), which holds the required plug-ins and dependencies, and its main configuration file is “pom.xml”, where we keep all the build instructions.

113. Advantages of Maven?


• Automated tasks (Mention all in pom.xml)
• Multiple Tasks at a time
• Quality product
• Minimize bad builds
• Keep history
• Save time – Save money


• Gives a set of standards
• Gives a defined project life cycle (phases and goals)
• Manages all dependencies
• Uniformity across projects
• Re-usability

114. List of Build tools available in Market?


• C and C++: make (Makefiles)
• .NET: MSBuild / Visual Studio
• Java: Ant, Maven, Gradle

115. What is the architecture of Maven?


Maven's main configuration file is pom.xml; for one project there is one workspace and one pom.xml.
Requirements for a build:
• Source code (pulled from GitHub or another SCM)
• Compiler plug-ins (pulled from the remote repository into the local repository, from where Maven brings them into the workspace)
• Dependencies (pulled from the remote repository into the local repository, from where Maven brings them into the workspace)

116. What is Maven’s Build Life Cycle?


In Maven the build life cycle is a sequence of phases (each of which binds goals). The main ones are listed here; an example command follows the list.
• Generate resources (dependencies)
• Compile the code
• Unit test
• Package (build)
• Install (into the local repository and artifactory)
• Deploy (to servers)
• Clean (delete all runtime/build files)
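For illustration, running

mvn clean install

first removes the previous build output and then executes every phase up to install in order (compile, test, package, install into the local repository).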

117. What does POM.XML contains?


POM.XML is maven’s main configuration file where we keep all details related to project. It contains
• Metadata about that project
• Dependencies required to build the project
• The kind of project
• Kind of output you want (.jar, .war)
• Description about that project

118. What is Multi-Module Project in Maven?


• Dividing a big project into small modules is called a multi-module project.
• Each module has its own src folder and pom.xml, so that it can be built separately.
• To build all modules with one command, there is a parent pom.xml, which calls all the child pom.xml files automatically.
• In the parent pom.xml, the child modules must be listed in order.


119. What is Nagios?


Nagios is a monitoring tool: with Nagios we can monitor infrastructure and send alerts. The machine on which Nagios is installed becomes the Nagios server. Monitoring is important because we need to make sure our servers never go down unnoticed; if a server does go down, we need an immediate alert so that we can take action and bring it back up quickly. That is what we use Nagios for.

120. Why do we have to use Nagios?


There are many advantages to using Nagios:
• It is both mature and current: it has been around for a long time and is still being upgraded to match market requirements.
• Stable: it has been in use for many years and performs well.
• Many plug-ins are available by default.
• It has its own database.
• Nagios is both a monitoring and an alerting tool.

121. How does Nagios works?


• We specify in the configuration files what data is to be collected from which machine.
• The Nagios daemon reads those details.
• The daemon uses the NRPE (Nagios Remote Plugin Executor) agent to collect data from the nodes and stores it in its own database.
• Finally, the results are displayed on the Nagios dashboard.

122. What is the Directory structure of Nagios?


/usr/local/nagios/bin – binary files
/usr/local/nagios/sbin – CGI files (for the web page)
/usr/local/nagios/libexec – plugins
/usr/local/nagios/share – PHP files
/usr/local/nagios/etc – configuration files
/usr/local/nagios/var – logs
/usr/local/nagios/var/status.dat – the database (status file)

123. What are the Important Configuration files in Nagios?


Nagios main configuration file is /usr/local/nagios/etc/nagios.cfg
/usr/local/nagios/etc/objects/localhost.cfg (where we keep hosts information)
/usr/local/nagios/etc/objects/contacts.cfg (whom to be informed (emails))
/usr/local/nagios/etc/objects/timeperiods.cfg (at what time to monitor)
/usr/local/nagios/etc/objects/commands.cfg (plugins to use)
/usr/local/nagios/etc/objects/templates.cfg (sample templates)



DevOps Interview Questions
codespaghetti.com/devops-interview-questions/

DevOps
DevOps Interview Questions, from beginner to expert level. These questions cover a wide range of topics any DevOps professional needs to know to nail an interview.

Table of Contents:

CHAPTER 1: DevOps Introduction

CHAPTER 2: Gradle Interview Questions

CHAPTER 3: Groovy Interview Questions

CHAPTER 4: Maven Interview Questions

CHAPTER 5: Linux Interview Questions


CHAPTER 6: GIT Interview Questions

CHAPTER 7: Continuous Integration Interview Questions

CHAPTER 8: Amazon AWS Interview Questions

CHAPTER 9: Splunk Interview Questions

CHAPTER 10: Log4J Interview Questions

CHAPTER 11: Docker Interview Questions

CHAPTER 12: VmWare Interview Questions

CHAPTER 13: DevOps Security Interview Questions

CHAPTER 14: DevOps Testing Interview Questions

CHAPTER 15: Summary

CHAPTER 16: DevOps Interview Questions PDF

Question: What Are Benefits Of DevOps?

DevOps is gaining more popularity day by day. Here are some benefits of implementing
DevOps Practice.

Release Velocity: DevOps enable organizations to achieve a great release velocity. We


can release code to production more often and without any hectic problems.

Development Cycle: DevOps shortens the development cycle from initial design to
production.

Full Automation: DevOps helps to achieve full automation from testing, to build, release
and deployment.

Deployment Rollback: In DevOps, we plan for any failure in deployment rollback due to a
bug in code or issue in production. This gives confidence in releasing feature without
worrying about downtime for rollback.

Defect Detection: With DevOps approach, we can catch defects much earlier than
releasing to production. It improves the quality of the software.

Collaboration: With DevOps, collaboration between development and operations


professionals increases.

Performance-oriented: With DevOps, organizations follow a performance-oriented culture in which teams become more productive and more innovative.

Question: What Is The Typical DevOps workflow?

The typical DevOps workflow is as follows:

Atlassian Jira for writing requirements and tracking tasks.


Based on the Jira tasks, developers checking code into GIT version control system.
The code checked into GIT is built by using Apache Maven.
The build process is automated with Jenkins.
During the build process, automated tests run to validate the code checked in by a
developer.
Code built on Jenkins is sent to organization’s Artifactory.
Jenkins automatically picks the libraries from Artifactory and deploys it to Production.
During Production deployment, Docker images are used to deploy same code on
multiple hosts.
Once the code is deployed to Production, monitoring tools like Nagios are used
to check the health of production servers.
Splunk based alerts inform the admins of any issues or exceptions in production.

Question: DevOps Vs Agile?

Agile is a set of values and principles about how to develop software in a systematic way.

Whereas DevOps is a way to quickly, easily and repeatably move that software into
production infrastructure, in a safe and simple way.

In order to achieve that, we use a set of DevOps tools and techniques.

Question: What Is The Most Important Thing DevOps Helps Us To Achieve?

Most important aspect of DevOps is to get the changes into production as quickly as
possible while minimizing risks in software quality assurance and compliance. This is the
primary objective of DevOps.

Question: What Are Some DevOps tools.

Here is a list of some most important DevOps tools


Git
Jenkins, Bamboo
Selenium
Puppet, BitBucket
Chef
Ansible, Artifactory
Nagios
Docker
Monit
ELK –Elasticsearch, Logstash, Kibana
Collectd/Collect

Question: How To Deploy Software?

Code is deployed by adopting continuous delivery best practices. Which means that
checked in code is built automatically and then artifacts are published to repository servers.

On the application servers there are deployment triggers, usually timed using cron jobs.
All the artifacts are then downloaded and deployed automatically.

Gradle DevOps Interview Questions

Question: What is Gradle?

Gradle is an open-source build automation system that builds upon the concepts of Apache
Ant and Apache Maven. Instead of an XML configuration file, Gradle build scripts are written in a
proper programming language, a DSL based on Groovy (a Kotlin DSL is also available).

Gradle uses a directed acyclic graph ("DAG") to determine the order in which tasks can be
run.

Gradle was designed for multi-project builds, which can grow to be quite large. It supports
incremental builds by intelligently determining which parts of the build tree are up to date,
any task dependent only on those parts does not need to be re-executed.

Question: What Are Advantages of Gradle?

Gradle provides many advantages and here is a list

Declarative Builds: Probably one of the biggest advantages of Gradle is the Groovy
language. Gradle provides declarative language elements, which provide build-by-
convention support for Java, Groovy, Web and Scala.
Structured Build: Gradle allows developers to apply common design principles to
their build. It provides a perfect structure for build, so that well-structured and easily
maintained, comprehensible build structures can be built.
Deep API: Using this API, developers can monitor and customize its configuration
and execution behaviors.
Scalability: Gradle can easily increase productivity, from simple and single project
builds to huge enterprise multi-project builds.
Multi-project builds: Gradle supports multi-project builds and also partial builds.
Build management: Gradle supports different strategies to manage project
dependencies.
Build integration: Gradle fully supports Ant tasks and the Maven and Ivy
repository infrastructure for publishing and retrieving dependencies. It also provides a
converter for turning a Maven pom.xml into a Gradle script.
Ease of migration: Gradle can easily adapt to any project structure.
Gradle Wrapper: Gradle Wrapper allows developers to execute Gradle builds on
machines where Gradle is not installed. This is useful for continuous integration of
servers.
Free open source − Gradle is an open source project, and licensed under the
Apache Software License (ASL).
Groovy: Gradle's build scripts are written in Groovy, not XML. But unlike other
approaches this is not for simply exposing the raw scripting power of a dynamic
language. The whole design of Gradle is oriented towards being used as a language,
not as a rigid framework.

Question: Why Gradle Is Preferred Over Maven or Ant?

There isn't great support for multi-project builds in Ant and Maven. Developers end up
doing a lot of coding to support multi-project builds.

Also having some build-by-convention is nice and makes build scripts more concise. With
Maven, it takes build by convention too far, and customizing your build process becomes a
hack.

Maven also promotes every project publishing an artifact. Maven does not support
subprojects to be built and versioned together.

But with Gradle developers can have the flexibility of Ant and build by convention of
Maven.

Groovy is easier and clean to code than XML. In Gradle, developers can define
dependencies between projects on the local file system without the need to publish
artifacts to repository.

Question: Gradle Vs Maven

The following is a summary of the major differences between Gradle and Apache Maven:

Flexibility: Google chose Gradle as the official build tool for Android; not because build
scripts are code, but because Gradle is modeled in a way that is extensible in the most
fundamental ways.

Both Gradle and Maven provide convention over configuration. However, Maven provides a
very rigid model that makes customization tedious and sometimes impossible.

While this can make it easier to understand any given Maven build, it also makes it
unsuitable for many automation problems. Gradle, on the other hand, is built with an
empowered and responsible user in mind.

Performance
Both Gradle and Maven employ some form of parallel project building and parallel
dependency resolution. The biggest differences are Gradle's mechanisms for work
avoidance and incrementality. The following features make Gradle much faster than Maven:

Incrementality: Gradle avoids work by tracking the inputs and outputs of tasks and only
running what is necessary.
Build Cache: reuses the build outputs of any other Gradle build with the same
inputs.
Gradle Daemon: a long-lived process that keeps build information "hot" in memory.

User Experience
Maven has very good support for various IDEs. Gradle's IDE support continues to
improve quickly but is not yet as good as Maven's.
Although IDEs are important, a large number of users prefer to execute build operations
through a command-line interface. Gradle provides a modern CLI that has discoverability
features like `gradle tasks`, as well as improved logging and command-line completion.

Dependency Management
Both build systems provide built-in capability to resolve dependencies from configurable
repositories. Both are able to cache dependencies locally and download them in parallel.

As a library consumer, Maven allows one to override a dependency, but only by version.
Gradle provides customizable dependency selection and substitution rules that can be
declared once and handle unwanted dependencies project-wide. This substitution
mechanism enables Gradle to build multiple source projects together to create composite
builds.

Maven has few, built-in dependency scopes, which forces awkward module architectures in
common scenarios like using test fixtures or code generation. There is no separation
between unit and integration tests, for example. Gradle allows custom dependency scopes,
which provides better-modeled and faster builds.

Question: What are Gradle Build Scripts?

A Gradle build script handles projects and tasks. Every Gradle build represents one
or more projects.

A project represents, for example, a library JAR or a web application.

Question: What is Gradle Wrapper?

The wrapper is a batch script on Windows, and a shell script for other operating systems.
Gradle Wrapper is the preferred way of starting a Gradle build.

When a Gradle build is started via the wrapper, the wrapper automatically downloads the declared
Gradle version (if it is not already present) and then runs the build with it.

Question: What is Gradle Build Script File Name?

The build script file is named build.gradle (or build.gradle.kts when the Kotlin DSL is used). It is
written in the Gradle build DSL and configures the project.

Question: How To Add Dependencies In Gradle?

To add a dependency to your project, you declare it under the appropriate
configuration inside the dependencies block of the build.gradle file.

Question: What Is Dependency Configuration?

A dependency configuration is a named set of dependencies, including external dependencies that
need to be downloaded from the web. The key standard configurations are:

1. Compile: the dependencies required to compile the production source of the project.
2. Runtime: the dependencies required by the production classes at runtime.
3. Test Compile: the dependencies required to compile the test source of the project.
4. Test Runtime: the dependencies required to run the tests; by default this also includes the
runtime and test-compile dependencies.

Question: What Is Gradle Daemon?

A daemon is a computer program that runs as a background process, rather


than being under the direct control of an interactive user.

Gradle runs on the Java Virtual Machine (JVM) and uses several supporting
libraries that require a non-trivial initialization time.

As a result, it can sometimes seem a little slow to start. The solution to this
problem is the Gradle Daemon: a long-lived background process that
executes your builds much more quickly than would otherwise be the case.

We accomplish this by avoiding the expensive bootstrapping process as


well as leveraging caching, by keeping data about your project in memory.
Running Gradle builds with the Daemon is no different from running them without it, except that the builds are faster.

Question: What Is Dependency Management in Gradle?

Software projects rarely work in isolation. In most cases, a project relies on reusable
functionality in the form of libraries or is broken up into individual components to compose a
modularized system.

Dependency management is a technique for declaring, resolving and using dependencies


required by the project in an automated fashion.
Gradle has built-in support for dependency management and lives up to the task of handling the
typical scenarios encountered in modern software projects.

Question: What Are Benefits Of Daemon in Gradle 3.0

Here are some of the benefits of Gradle daemon

1. It provides a good user experience
2. It is very powerful
3. It is resource-aware
4. It is well integrated with Gradle build scans
5. It is enabled by default

Question: What Is Gradle Multi-Project Build?

Multi-project builds help with modularization. They allow a person to concentrate on one area
of work in a larger project, while Gradle takes care of dependencies from other parts of the
project.

A multi-project build in Gradle consists of one root project, and one or more subprojects
that may also have subprojects.

While each subproject could configure itself in complete isolation of the other subprojects, it
is common that subprojects share common traits.

It is then usually preferable to share configurations among projects, so the same


configuration affects several subprojects.

Question: What Is Gradle Build Task?

A Gradle build is made up of one or more projects, and each project is made up of tasks; a task
represents a single piece of work performed by the build.

Some key of features of Gradle Build Tasks are:

1. Tasks have lifecycle hooks (doFirst, doLast)
2. Build scripts are code
3. There are default tasks like run, clean etc.
4. Task dependencies can be defined using properties like dependsOn

Question: What is Gradle Build Life Cycle?

Gradle Build life cycle consists of following three steps

-Initialization phase: In this phase Gradle determines which projects take part in the build and creates a project object for each of them

-Configuration phase: In this phase all the tasks are available for the current build and a
dependency graph is created

-Execution phase: In this phase tasks are executed.

Question: What is Gradle Java Plugin?

The Java plugin adds Java compilation along with testing and bundling capabilities to the
project. It introduces the notion of a SourceSet, which acts as a group of source files
compiled and executed together.

Question: What is Dependency Configuration?

A set of dependencies is termed as dependency configuration, which contains some


external dependencies for download and installation.

Here are some key features of dependency configuration are:

Compile:

The project must be able to compile together

Runtime:

It is the required time needed to get the dependency work in the collection.

Test Compile:

The check source of the dependencies is to be collected in order to run the project.

Test Runtime:

The dependencies needed to run the tests; by default this also includes the runtime and test-compile configurations.

Groovy DevOps Interview Questions

Question: What is Groovy?

Apache Groovy is an object-oriented programming language for the Java platform.

It is both a static and dynamic language with features similar to those of Python, Ruby, Perl,
and Smalltalk.

It can be used as both a programming language and a scripting language for the Java
Platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates
seamlessly with other Java code and libraries.

Groovy uses a curly-bracket syntax similar to Java. Groovy supports closures, multiline
strings, and expressions embedded in strings.

Much of Groovy's power lies in its AST transformations, triggered through annotations.

Question: Why Groovy Is Gaining Popularity?

Here are few reasons for popularity of Groovy

Familiar OOP language syntax.


Extensive stock of various Java libraries
Increased expressivity (type less to do more)
Dynamic typing (lets you code more quickly, at least initially)
Closures
Native associative array/key-value mapping support (you can create an associative
array literal)
String interpolation (cleaner creation of strings displaying values)
Regexes being first-class citizens

Question: What Is Meant By Thin Documentation In Groovy

Groovy's documentation is thin: the core documentation is limited, and there is little information
about the complex and run-time errors that can occur.

Developers are largely on their own; they normally have to figure out how the internals work
by themselves.

Question: How To Run Shell Commands in Groovy?

Groovy adds the execute method to String to make executing shells fairly easy

println "ls".execute().text

Question: In How Many Platforms you can use Groovy?

These are the infrastructure components where we can use groovy:

-Application Servers

-Servlet Containers

-Databases with JDBC drivers

-All other Java-based platforms

Question: Can Groovy Integrate With Non Java Based


Languages?

It is possible, but in this case the features are limited and Groovy cannot be made to handle all
tasks in the way it normally would.

Question: What are Pre-Requirements For Groovy?


Installing and using Groovy is easy. Groovy does not have complex system requirements. It
is OS independent.

Groovy performs well in most situations. There are many Java-based components in
Groovy, which makes it even easier to work with Java applications.

Questions: What Is Closure In Groovy?

A closure in Groovy is an open, anonymous, block of code that can take arguments, return
a value and be assigned to a variable. A closure may reference variables declared in its
surrounding scope. In opposition to the formal definition of a closure, Closure in the
Groovy language can also contain free variables which are defined outside of its
surrounding scope.

A closure definition follows this syntax:

{ [closureParameters -> ] statements }

Where [closureParameters->] is an optional comma-delimited list of parameters, and


statements are 0 or more Groovy statements. The parameters look similar to a method
parameter list, and these parameters may be typed or untyped.

When a parameter list is specified, the -> character is required and serves to separate the
arguments from the closure body. The statements portion consists of 0, 1, or many Groovy
statements.

Question: What is ExpandoMetaClass In Groovy?

Through this class programmers can add properties, constructors, methods and operations
in the task. It is a powerful option available in the Groovy.

By default this behaviour is not inherited; users need to enable it explicitly by calling
“ExpandoMetaClass.enableGlobally()”.

Question: What Are Limitations Of Groovy?

Groovy has some limitations. They are described below

It can be slower than the other object-oriented programming languages.


It might need more memory than other languages.
The start-up time of Groovy is slow and needs improvement.
For using groovy, you need to have enough knowledge of Java. Knowledge of Java
is important because half of groovy is based on Java.

It might take you some time to get used to the usual syntax and default typing.
It consists of thin documentation.

Question: How To Write HelloWorld Program In Groovy

The following is a basic Hello World program written in Groovy:

class Test {

    static void main(String[] args) {

        println('Hello World')
    }
}

Question: How To Declare String In Groovy?

In Groovy, strings can be declared as follows:

The string is enclosed in single or double quotes.


Double-quoted strings may contain Groovy expressions noted in ${} (interpolation).
Square-bracket syntax may be applied, like charAt(i).

Question: Differences Between Java And Groovy?

Groovy tries to be as natural as possible for Java developers. Here are all the major
differences between Java and Groovy.

-Default imports
In Groovy all these packages and classes are imported by default, i.e. Developers do not
have to use an explicit import statement to use them:
java.io.*
java.lang.*
java.math.BigDecimal
java.math.BigInteger
java.net.*
java.util.*
groovy.lang.*
groovy.util.*

-Multi-methods
In Groovy, the methods which will be invoked are chosen at runtime. This is called runtime
dispatch or multi-methods. It means that the method will be chosen based on the types of
the arguments at runtime. In Java, this is the opposite: methods are chosen at compile
time, based on the declared types.

-Array initializers
In Groovy, the { … } block is reserved for closures. That means that you cannot create
array literals with this syntax:

int[] arraySyntex = { 6, 3, 1}

You actually have to use:

int[] arraySyntex = [1,2,3]

-ARM blocks
ARM (Automatic Resource Management) block from Java 7 are not supported in Groovy.
Instead, Groovy provides various methods relying on closures, which have the same effect
while being more idiomatic.

-GStrings
As double-quoted string literals are interpreted as GString values, Groovy may fail with
compile error or produce subtly different code if a class with String literal containing a
dollar character is compiled with Groovy and Java compiler.
While typically, Groovy will auto-cast between GString and String if an API declares
the type of a parameter, beware of Java APIs that accept an Object parameter and then
check the actual type.

-String and Character literals


Singly-quoted literals in Groovy are used for String , and double-quoted result
in String or GString , depending whether there is interpolation in the literal.

assert 'c'.getClass()==String
assert "c".getClass()==String
assert "c${1}".getClass() in GString

Groovy will automatically cast a single-character String to char only when assigning to
a variable of type char . When calling methods with arguments of type char we need to
either cast explicitly or make sure the value has been cast in advance.

char a='a'
assert Character.digit(a, 16)==10 : 'But Groovy does boxing'
assert Character.digit((char) 'a', 16)==10

try {
assert Character.digit('a', 16)==10
assert false: 'Need explicit cast'
} catch(MissingMethodException e) {
}

Groovy supports two styles of casting and in the case of casting to char there are subtle
differences when casting a multi-char strings. The Groovy style cast is more lenient and will
take the first character, while the C-style cast will fail with exception.

// for single char strings, both are the same


assert ((char) "c").class==Character
assert ("c" as char).class==Character

// for multi char strings they are not


try {
((char) 'cx') == 'c'
assert false: 'will fail - not castable'
} catch(GroovyCastException e) {
}
assert ('cx' as char) == 'c'
assert 'cx'.asType(char) == 'c'

-Behaviour of ==
In Java == means equality of primitive types or identity for objects. In
Groovy == translates to a.compareTo(b)==0 , if they are Comparable ,
and a.equals(b) otherwise. To check for identity, there is is . E.g. a.is(b) .

Question: How To Test Groovy Application?

The Groovy programming language comes with great support for writing tests. In addition
to the language features and test integration with state-of-the-art testing libraries and
frameworks.

The Groovy ecosystem has produced a rich set of testing libraries and frameworks.

Groovy Provides following testing capabilities

Junit Integrations

Spock for specifications

Geb for Functional Test

Groovy also has excellent built-in support for a range of mocking and stubbing alternatives.
When using Java, dynamic mocking frameworks are very popular.

A key reason for this is that it is hard work creating custom hand-crafted mocks using Java.
Such frameworks can be used easily with Groovy.

Question: What Are Power Assertions In Groovy?

Writing tests means formulating assumptions by using assertions. In Java this can be done
by using the assert keyword. But Groovy comes with a powerful variant of assert also
known as power assertion statement.
Groovy’s power assert differs from the Java version in its output given the boolean
expression validates to false :

def x = 1
assert x == 2

// Output:
//
// Assertion failed:
// assert x == 2
// | |
// 1 false

This section shows the std-err output

The java.lang.AssertionError that is thrown whenever the assertion can not be


validated successfully, contains an extended version of the original exception message.
The power assertion output shows evaluation results from the outer to the inner expression.
The power assertion statements true power unleashes in complex Boolean statements, or
statements with collections or other toString -enabled classes:

def x = [1,2,3,4,5]
assert (x << 6) == [6,7,8,9,10]

// Output:
//
// Assertion failed:
// assert (x << 6) == [6,7,8,9,10]
// | | |
// | | false
// | [1, 2, 3, 4, 5, 6]
// [1, 2, 3, 4, 5, 6]

Question: Can We Use Design Patterns In Groovy?

Design patterns can also be used with Groovy. Here are the important points:
• Some patterns carry over directly (and can make use of normal Groovy syntax
improvements for greater readability)
• Some patterns are no longer required because they are built right into the language
or because Groovy supports a better way of achieving the intent of the pattern
• Some patterns that have to be expressed at the design level in other languages can
be implemented directly in Groovy (due to the way Groovy can blur the distinction
between design and implementation)

Question: How To Parse And Produce JSON Object In Groovy?

Groovy comes with integrated support for converting between Groovy objects and JSON.
The classes dedicated to JSON serialisation and parsing are found in
the groovy.json package.
JsonSlurper is a class that parses JSON text or reader content into Groovy data
structures (objects) such as maps, lists and primitive types
like Integer , Double , Boolean and String .

The class comes with a bunch of overloaded parse methods plus some special methods
such as parseText , parseFile and others

Question: What Is Difference Between XmlParser


And XmlSluper?

XmlParser and XmlSluper are used for parsing XML with Groovy. Both have the same
approach to parse an xml.

Both come with a bunch of overloaded parse methods plus some special methods such
as parseText , parseFile and others.

XmlSlurper

def text = '''


<list>
<technology>
<name>Groovy</name>
</technology>
</list>
'''

def list = new XmlSlurper().parseText(text)

assert list instanceof groovy.util.slurpersupport.GPathResult


assert list.technology.name == 'Groovy'

Parsing the XML and returning the root node as a GPathResult

Checking we’re using a GPathResult

Traversing the tree in a GPath style

XmlParser

def text = '''
<list>
<technology>
<name>Groovy</name>
</technology>
</list>
'''

def list = new XmlParser().parseText(text)

assert list instanceof groovy.util.Node


assert list.technology.name.text() == 'Groovy'

Parsing the XML and returning the root node as a Node

Checking we’re using a Node

Traversing the tree in a GPath style

Let’s see the similarities between XMLParser and XMLSlurper first:


Both are based on SAX, so they both have a low memory footprint
Both can update/transform the XML

But they have key differences:


XmlSlurper evaluates the structure lazily. So if you update the xml you’ll have to
evaluate the whole tree again.
XmlSlurper returns GPathResult instances when parsing XML
XmlParser returns Node objects when parsing XML

When to use one or the another?


If you want to transform an existing document to another then XmlSlurper will be
the choice
If you want to update and read at the same time then XmlParser is the choice.

Maven DevOps Interview Questions

Question: What is Maven?

Maven is a build automation tool used primarily for Java projects. Maven addresses two
aspects of building software:

First: It describes how software is built

Second: It describes its dependencies.

Unlike earlier tools like Apache Ant, it uses conventions for the build procedure, and only
exceptions need to be written down.

An XML file describes the software project being built, its dependencies on other external
modules and components, the build order, directories, and required plug-ins.

It comes with pre-defined targets for performing certain well-defined tasks such as
compilation of code and its packaging.

Maven dynamically downloads Java libraries and Maven plug-ins from one or more
repositories such as the Maven 2 Central Repository, and stores them in a local cache.

This local cache of downloaded artifacts can also be updated with artifacts created by local
projects. Public repositories can also be updated.

Question: What Are Benefits Of Maven?

One of the biggest benefits of Maven is that its design regards all projects as having a
certain structure and a set of supported task work-flows.
Maven has quick project setup, no complicated build.xml files, just a POM and go
All developers in a project use the same jar dependencies due to centralized POM.
In Maven getting a number of reports and metrics for a project "for free"
It reduces the size of source distributions, because jars can be pulled from a central
location
Maven lets developers get your package dependencies easily
With Maven there is no need to add jar files manually to the class path

Question: What Are Build Life cycles In Maven?

Build lifecycle is a list of named phases that can be used to give order to goal execution.
One of Maven's standard life cycles is the default lifecycle, which includes the following
phases, in this order

1 validate
2 generate-sources
3 process-sources
4 generate-resources
5 process-resources
6 compile
7 process-test-sources
8 process-test-resources
9 test-compile
10 test
11 package
12 install
13 deploy

Question: What Is Meant By Build Tool?

Build tools are programs that automate the creation of executable applications from source
code. Building incorporates compiling, linking and packaging the code into a usable or
executable form.

In small projects, developers will often manually invoke the build process. This is not
practical for larger projects.

Where it is very hard to keep track of what needs to be built, in what sequence and what
dependencies there are in the building process. Using an automation tool like Maven,
Gradle or ANT allows the build process to be more consistent.

Question: What Is The Dependency Management Mechanism In
Maven?

Maven's dependency-handling mechanism is organized around a coordinate system


identifying individual artifacts such as software libraries or modules.

For example if a project needs Hibernate library. It has to simply declare Hibernate's
project coordinates in its POM.

Maven will automatically download the dependency and the dependencies that Hibernate
itself needs and store them in the user's local repository.

Maven 2 Central Repository is used by default to search for libraries, but developers can
configure the custom repositories to be used (e.g., company-private repositories) within the
POM.

Question: What Is Central Repository Search Engine?

The Central Repository Search Engine, can be used to find out coordinates for different
open-source libraries and frameworks.

Question: What are Plugins In Maven?

Most of Maven's functionality is in plugins. A plugin provides a set of goals that can be
executed using the following syntax:

mvn [plugin-name]:[goal-name]

For example, a Java project can be compiled with the compiler-plugin's compile-goal by
running mvn compiler:compile . There are Maven plugins for building, testing, source
control management, running a web server, generating Eclipse project files, and much
more. Plugins are introduced and configured in a <plugins>-section of a pom.xml file.
Some basic plugins are included in every project by default, and they have sensible default
settings.

Questions: What Is Difference Between Maven And ANT?

Ant – Maven

Ant is a toolbox. – Maven is a framework.

Ant has no life cycle. – Maven has a life cycle.

Ant doesn't have formal conventions. – Maven has conventions for placing source code, compiled code, etc.

Ant is procedural. – Maven is declarative.

Ant scripts are not reusable. – Maven plugins are reusable.

Question: What is POM In Maven?

A Project Object Model (POM) provides all the configuration for a single project. General
configuration covers the project's name, its owner and its dependencies on other projects.

One can also configure individual phases of the build process, which are implemented
as plugins.

For example, one can configure the compiler-plugin to use Java version 1.5 for compilation,
or specify packaging the project even if some unit tests fail.

Larger projects should be divided into several modules, or sub-projects, each with its own
POM. One can then write a root POM through which one can compile all the modules with a
single command. POMs can also inherit configuration from other POMs. All POMs inherit
from the Super POM by default. The Super POM provides default configuration, such as
default source directories, default plugins, and so on.

Question: What Is Maven Archetype?

Archetype is a Maven project templating toolkit. An archetype is defined as an original


pattern or model from which all other things of the same kind are made.

Question: What Is Maven Artifact?

In Maven artifact is simply a file or JAR that is deployed to a Maven repository. An artifact
has

-Group ID

-Artifact ID

-Version string. The three together uniquely identify the artifact. All the project
dependencies are specified as artifacts.

Question: What Is Goal In Maven?

In Maven a goal represents a specific task which contributes to the building and managing
of a project.

It may be bound to 1 or many build phases. A goal not bound to any build phase could be
executed outside of the build lifecycle by its direct invocation.

Question: What Is Build Profile?

In Maven a build profile is a set of configurations. This set is used to define or override
default behaviour of Maven build.

Build profile helps the developers to customize the build process for different environments.
For example you can set profiles for Test, UAT, Pre-prod and Prod environments each with
its own configurations etc.

Question: What Are Build Phases In Maven?

There are 6 build phases:
-Validate
-Compile
-Test
-Package
-Install
-Deploy

Question: What Are The Target, Source & Test Folders In Maven?

Target: this folder holds the compiled code and packaged output produced by the build process.
Source: this folder usually holds the Java source code.
Test: this directory contains all the unit-test code.

Question: What Is The Difference Between Compile & Install?

Compile: compiles the source code of the project.
Install: installs the built package into the local repository, for use as a dependency in other projects locally.

Question: How To Activate Maven Build Profile?

A Maven Build Profile can be activated in following ways

Using command line console input.


By using Maven settings.
Based on environment variables (User/System variables).

Linux DevOps Interview Questions

Question: What is Linux?

Linux is the best-known and most-used open source operating system. As an operating
system, Linux is a software that sits underneath all of the other software on a computer,
receiving requests from those programs and relaying these requests to the computer’s
hardware.

In many ways, Linux is similar to other operating systems such as Windows, OS X, or iOS

But Linux also is different from other operating systems in many important ways.

First, and perhaps most importantly, Linux is open source software. The code used to
create Linux is free and available to the public to view, edit, and—for users with the
appropriate skills—to contribute to.

The Linux operating system consists of 3 components, which are described below:

Kernel: Linux is a monolithic kernel that is free and open source software that is
responsible for managing hardware resources for the users.
System Library: System Library plays a vital role because application programs
access Kernels feature using system library.
System Utility: System Utility performs specific and individual level tasks.

Question: What Is Difference Between Linux & Unix?

Unix and Linux are similar in many ways, and in fact, Linux was originally created to be
similar to Unix.

Both have similar tools for interfacing with the systems, programming tools, filesystem
layouts, and other key components.

However, Unix is not free. Over the years, a number of different operating systems have
been created that attempted to be “unix-like” or “unix-compatible,” but Linux has been the
most successful, far surpassing its predecessors in popularity.

Question: What Is BASH?

BASH stands for Bourne Again Shell. BASH is the UNIX shell for the GNU operating
system. So, BASH is the command language interpreter that helps you to enter your input,
and thus you can retrieve information.

In a straightforward language, BASH is a program that will understand the data entered by
the user and execute the command and gives output.

Question: What Is CronTab?

The crontab (short for "cron table") is a list of commands that are scheduled to run at
regular time intervals on computer system. The crontab command opens the crontab for
editing, and lets you add, remove, or modify scheduled tasks.

The daemon which reads the crontab and executes the commands at the right time is
called cron. It's named after Kronos, the Greek god of time.

Command syntax

crontab [-u user] file

crontab [-u user] [-l | -r | -e] [-i] [-s]

Question: What Is Daemon In Linux?

A daemon is a type of program on Linux operating systems that runs unobtrusively in the
background, rather than under the direct control of a user, waiting to be activated by the
occurrence of a specific event or condition

Unix-like systems typically run numerous daemons, mainly to accommodate requests for
services from other computers on a network, but also to respond to other programs and to
hardware activity.

Examples of actions or conditions that can trigger daemons into activity are a specific time
or date, passage of a specified time interval, a file landing in a particular directory, receipt of
an e-mail or a Web request made through a particular communication line.

The program that triggers the action or condition does not need to be aware that a daemon is listening, although programs frequently perform an action specifically because they know it will implicitly wake a daemon.

Question: What Is Process In Linux?

Daemons are usually instantiated as processes. A process is an executing (i.e., running) instance of a program.

Processes are managed by the kernel (i.e., the core of the operating system), which
assigns each a unique process identification number (PID).

There are three basic types of processes in Linux:

Interactive: Interactive processes are run interactively by a user at the command line.
Batch: Batch processes are submitted from a queue of processes and are not associated with the command line; they are well suited for performing recurring tasks when system usage is otherwise low.
Daemon: Daemons are recognized by the system as any processes whose parent process has a PID of one.

Question: What Is CLI In Linux?

CLI (Command Line Interface) is a type of human-computer interface that relies solely on
textual input and output.

That is, the entire display screen, or the currently active portion of it, shows
only characters (and no images), and input is usually performed entirely with a keyboard.

Question: What Is Linux Kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware
in your computer.

It is responsible for interfacing all of your applications that are running in “user mode” down
to the physical hardware, and allowing processes, known as servers, to get information
from each other using inter-process communication (IPC).

There are three types of kernels:

Microkernel: A microkernel takes the approach of only managing what it has to: CPU, memory, and IPC. Pretty much everything else in a computer can be seen as an accessory and can be handled in user mode.
Monolithic Kernel: Monolithic kernels are the opposite of microkernels; they encompass not only the CPU, memory, and IPC, but also things like device drivers, file system management, and system server calls.
Hybrid Kernel: Hybrid kernels have the ability to pick and choose what they want to run in user mode and what they want to run in supervisor mode.

Because the Linux kernel is monolithic, it has the largest footprint and the most complexity of the three types. This was a design choice that was debated quite a bit in the early days of Linux, and the kernel still carries some of the design flaws that are inherent to monolithic kernels.

Question: What Is Partial Backup In Linux?

Partial backup refers to selecting only a portion of file hierarchy or a single partition to back
up.

Question: What Is Root Account?

The root account is the system administrator account. It provides you full access to and control of the system.

The admin can create and maintain user accounts, assign different permissions to each account, etc.

Question: What Is Difference Between Cron and Anacron?

One of the main differences between cron and anacron is that cron works on systems that are running continuously, while anacron is used for systems that are not running continuously.

1. Another difference between the two is that cron jobs can run as often as every minute, but anacron jobs can run at most once a day.
2. Any normal user can schedule cron jobs, but anacron jobs can be scheduled only by the superuser.
3. Cron should be used when you need to execute a job at a specific time, whereas anacron should be used when there is no restriction on the timing and the job can be executed at any time.
4. If we think about which one is ideal for servers or desktops, then cron should be used for servers while anacron should be used for desktops or laptops.

Question: What Is Linux Loader?

Linux Loader (LILO) is a boot loader for the Linux operating system. It loads Linux into main memory so that it can begin its operations.

Question: What Is Swap Space?

Swap space is the amount of disk space that is allocated for use by Linux to temporarily hold the memory of concurrently running programs.

It is used when RAM does not have enough free memory to support all concurrently running programs.

This memory management involves swapping memory pages to and from physical storage.

Question: What Are Linux Distributions?

There are around six hundred Linux distributions. Let us see some of the important ones:
Ubuntu: A well-known Linux distribution with a lot of pre-installed apps and easy-to-use repositories. It is very easy to use and feels similar to the Mac operating system.
Linux Mint: It uses the Cinnamon and MATE desktops. It is comfortable for people coming from Windows and well suited to newcomers.
Debian: It is a very stable, fast, and user-friendly Linux distribution.
Fedora: It is less stable but provides the latest versions of software. It uses the GNOME 3 desktop environment by default.
Red Hat Enterprise Linux: It is meant for commercial use and is well tested before release. It usually provides a stable platform for a long time.
Arch Linux: Every package has to be installed by you, so it is not suitable for beginners.

Question: Why Do Developers Use MD5?

MD5 is a hashing (message-digest) algorithm, so it is used to hash passwords before saving them rather than storing them in plain text.

Question: What Are File Permissions In Linux?


There are 3 types of permissions in Linux

Read: The user can read the file and list the directory.
Write: The user can write new files in the directory.
Execute: The user can access and run the file in a directory.

Question: Memory Management In Linux?


It is always necessary to keep a check on memory usage in order to find out whether users are able to access the server and whether the resources are adequate. There are roughly 5 methods that determine the total memory used by Linux.

They are explained below:

free command: This is the simplest and easiest command to check memory usage. For example: '$ free -m'; the '-m' option displays all the data in MB.
/proc/meminfo: The next way to determine memory usage is to read the /proc/meminfo file. For example: '$ cat /proc/meminfo'
vmstat: This command lays out the memory usage statistics. For example: '$ vmstat -s'
top command: This command determines the total memory usage as well as monitoring RAM usage.
htop: This command also displays memory usage along with other details.

Question: Granting Permissions In Linux?

The system administrator or the owner of a file can grant permissions using the 'chmod' command. Permission symbols (u, g, o for user, group, and others, and r, w, x for read, write, and execute) are combined with '+' or '-' when writing permissions, for example:

chmod +x
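
A couple of hedged examples (the file names below are hypothetical). Symbolic mode adds or removes individual permissions, while numeric mode sets them all at once:

chmod u+x deploy.sh      # give the owner execute permission
chmod go-w notes.txt     # remove write permission from group and others
chmod 754 deploy.sh      # rwx for owner, r-x for group, r-- for others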

Question: What Are Directory Commands In Linux?

Here are a few important directory commands in Linux (a short example session follows the list):

pwd: It is a built-in command which stands for 'print working directory'. It displays the current working location, the working path starting with /, and the directory of the user. Basically, it displays the full path to the directory you are currently in.
ls: This command lists all the files in the current directory.
cd: This stands for 'change directory'. It is used to change from the present directory to the directory you want to work in. Just type cd followed by the directory name to access that particular directory.
mkdir: This command is used to create an entirely new directory.
rmdir: This command is used to remove a directory from the system.
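
A short, hedged example session using these commands (the directory name demo_dir is hypothetical):

pwd                # show the full path of the current directory
mkdir demo_dir     # create a new directory
cd demo_dir        # move into it
ls                 # list its contents (empty for now)
cd ..              # go back to the parent directory
rmdir demo_dir     # remove the now-empty directory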

Question: What Is Shell Script In Linux?

In the simplest terms, a shell script is a file containing a series of commands.

The shell reads this file and carries out the commands as though they have been entered
directly on the command line.

The shell is somewhat unique, in that it is both a powerful command line interface to the
system and a scripting language interpreter.

Most of the things that can be done on the command line can be done in scripts, and most of the things that can be done in scripts can be done on the command line.

In addition to the features used directly on the command line, the shell also provides a set of features usually (but not always) used when writing programs.
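
A minimal, hedged example of a shell script (the file name greet.sh and the argument are hypothetical):

#!/bin/bash
# greet.sh - print a greeting for the name passed as the first argument
name=${1:-world}        # default to "world" if no argument is given
echo "Hello, $name!"

It can be made executable with chmod +x greet.sh and run as ./greet.sh DevOps.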

Question: Which Tools Are Used For Reporting Statistics In Linux?

Some of the popular and frequently used system resource reporting tools available on the Linux platform are:

vmstat
netstat
iostat
ifstat
mpstat.

These are used for reporting statistics from different system components such as virtual
memory, network connections and interfaces, CPU, input/output devices and more.

Question: What Is Dstat In Linux?


dstat is a powerful, flexible and versatile tool for generating Linux system resource statistics; it can serve as a replacement for all the tools mentioned in the question above.

It comes with extra features, counters and it is highly extensible, users with Python
knowledge can build their own plugins.

Features of dstat:

1. Joins information from vmstat, netstat, iostat, ifstat and mpstat tools
2. Displays statistics simultaneously
3. Orders counters and is highly extensible
4. Supports summarizing of grouped block/network devices
5. Displays interrupts per device
6. Works on accurate timeframes, no timeshifts when a system is stressed
7. Supports colored output, it indicates different units in different colors
8. Shows exact units and limits conversion mistakes as much as possible
9. Supports exporting of CSV output to Gnumeric and Excel documents

Question: Types Of Processes In Linux?


There are fundamentally two types of processes in Linux:

Foreground processes (also referred to as interactive processes) – these are initialized and controlled through a terminal session. In other words, there has to be a user connected to the system to start such processes; they haven't started automatically as part of the system functions/services.
Background processes (also referred to as non-interactive/automatic processes) –
are processes not connected to a terminal; they don’t expect any user input.

Question: Creation Of Processes In Linux?


A new process is normally created when an existing process makes an exact copy of itself
in memory.

The child process will have the same environment as its parent, but only the process ID
number is different.

There are two conventional ways used for creating a new process in Linux:

Using the system() function – this method is relatively simple; however, it is inefficient and carries certain security risks.
Using the fork() and exec() functions – this technique is a little more advanced but offers greater flexibility, speed, and security.

Question: What Are Parent And Child Processes In Linux?

Because Linux is a multi-user system, meaning different users can be running various
programs on the system, each running instance of a program must be identified uniquely
by the kernel.

A program is identified by its process ID (PID) as well as its parent process ID (PPID); therefore processes can further be categorized into:

Parent processes – these are processes that create other processes during run-
time.
Child processes – these processes are created by other processes during run-time.

Question: What Is Init Process Linux?

The init process is the mother (parent) of all processes on the system; it is the first program that is executed when the Linux system boots up, and it manages all other processes on the system. It is started by the kernel itself, so in principle it does not have a parent process.

The init process always has process ID of 1. It functions as an adoptive parent for all
orphaned processes.

You can use the pidof command to find the ID of a process:

# pidof systemd
# pidof top
# pidof httpd

Find Linux Process ID

To find the process ID and parent process ID of the current shell, run:

$ echo $$
$ echo $PPID

Question: What Are The Different States Of A Process In Linux?

During execution, a process changes from one state to another depending on its
environment/circumstances. In Linux, a process has the following possible states:

Running – here it’s either running (it is the current process in the system) or it’s
ready to run (it’s waiting to be assigned to one of the CPUs).
Waiting – in this state, a process is waiting for an event to occur or for a system
resource. Additionally, the kernel also differentiates between two types of waiting
processes; interruptible waiting processes – can be interrupted by signals and
uninterruptible waiting processes – are waiting directly on hardware conditions and
cannot be interrupted by any event/signal.
Stopped – in this state, a process has been stopped, usually by receiving a signal.
For instance, a process that is being debugged.
Zombie – here, a process is dead (it has been halted) but it still has an entry in the process table.

Question: How To View Active Processes In Linux?


There are several Linux tools for viewing/listing the running processes on a system; the two traditional and well-known ones are the ps and top commands, with glances as a newer alternative:

1. ps Command

It displays information about a selection of the active processes on the system as shown
below:

# ps

# ps -e | head

2. top – System Monitoring Tool

top is a powerful tool that offers you a dynamic real-time view of a running system as shown
in the screenshot below:

#top

3. glances – System Monitoring Tool

glances is a relatively new system monitoring tool with advanced features:

#glances

Question: How To Control Process?


Linux also has some commands for controlling processes such as kill, pkill, pgrep and
killall, below are a few basic examples of how to use them:

$ pgrep -u tecmint top


$ kill 2308
$ pgrep -u tecmint top
$ pgrep -u tecmint glances
$ pkill glances
$ pgrep -u tecmint glances

Question: Can We Send Signals To Processes In Linux?

The fundamental way of controlling processes in Linux is by sending signals to them. There
are multiple signals that you can send to a process, to view all the signals run:

$ kill -l

List All Linux Signals


To send a signal to a process, use the kill or pkill commands mentioned earlier (pgrep only looks up process IDs). But programs can only respond to signals if they are programmed to recognize those signals.

Most signals are for internal use by the system, or for programmers when they write code. The following signals are useful to a system user (a few usage examples follow the list):

SIGHUP 1 – sent to a process when its controlling terminal is closed.
SIGINT 2 – sent to a process by its controlling terminal when a user interrupts the process by pressing [Ctrl+C].
SIGQUIT 3 – sent to a process if the user sends a quit signal [Ctrl+\].
SIGKILL 9 – this signal immediately terminates (kills) a process and the process will
not perform any clean-up operations.
SIGTERM 15 – this a program termination signal (kill will send this by default).
SIGTSTP 20 – sent to a process by its controlling terminal to request it to stop
(terminal stop); initiated by the user pressing [Ctrl+Z] .
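
A few hedged examples of sending these signals (the PID 1234 and the process name myapp are hypothetical):

kill -SIGTERM 1234     # politely ask process 1234 to terminate (same as plain kill 1234)
kill -9 1234           # force-kill it with SIGKILL if it does not respond
pkill -HUP myapp       # send SIGHUP to all processes named myapp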

Question: How To Change The Priority Of A Process In Linux?

On a Linux system, all active processes have a priority and a certain nice value. Processes with a higher priority will normally get more CPU time than lower-priority processes.

However, a system user with root privileges can influence this with
the nice and renice commands.

From the output of the top command, the NI shows the process nice value:

$ top

List Linux Running Processes


Use the nice command to set a nice value for a process. Keep in mind that normal users can assign a nice value from 0 to 19 to processes they own. Only the root user can use negative nice values.

To renice the priority of a process, use the renice command as follows:

$ renice +8 2687
$ renice +8 2103

GIT DevOps Interview Questions

Question: What is Git?

Git is a version control system for tracking changes in computer files and coordinating work
on those files among multiple people.

It is primarily used for source code management in software development but it can be
used to keep track of changes in any set of files.

As a distributed revision control system it is aimed at speed, data integrity, and support for
distributed, non-linear workflows.

By far, the most widely used modern version control system in the world today is Git. Git is a mature, actively maintained open source project originally developed in 2005 by Linus Torvalds. Git is an example of a Distributed Version Control System: every developer's working copy of the code is also a repository that can contain the full history of all changes.

Question: What Are Benefits Of GIT?

Here are some of the advantages of using Git

Ease of use
Data redundancy and replication
High availability
Superior disk utilization and network performance
Only one .git directory per repository
Collaboration friendly
Projects of any scale, from small to large, can use Git

Question: What Is Repository In GIT?

The purpose of Git is to manage a project, or a set of files, as they change over time. Git
stores this information in a data structure called a repository. A git repository contains,
among other things, the following:

A set of commit objects.


A set of references to commit objects, called heads.

The Git repository is stored in the same directory as the project itself, in a subdirectory called .git. Note the differences from central-repository systems like CVS or Subversion:
There is only one .git directory, in the root directory of the project.
The repository is stored in files alongside the project. There is no central server repository.

Question: What Is Staging Area In GIT?

Staging is a step before the commit process in git. That is, a commit in git is performed in
two steps:

-Staging and

-Actual commit

As long as a change set is in the staging area, git allows you to edit it as you like
(replace staged files with other versions of staged files, remove changes from staging, etc.)
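
A hedged sketch of the two-step flow (the file name app.py is hypothetical):

git add app.py                        # stage the change
git status                            # verify what is staged
git commit -m "Describe the change"   # record the staged snapshot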

Question: What Is GIT STASH?

Often, when you’ve been working on part of your project, things are in a messy state and
you want to switch branches for a bit to work on something else.

The problem is, you don’t want to do a commit of half-done work just so you can get back to
this point later. The answer to this issue is the git stash command. Stashing takes the
dirty state of your working directory — that is, your modified tracked files and staged
changes — and saves it on a stack of unfinished changes that you can reapply at any time.
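
A hedged example of the typical stash workflow (the branch name hotfix is hypothetical):

git stash            # save the dirty working directory on the stash stack
git checkout hotfix  # switch branches and do other work
git checkout -       # come back to the original branch
git stash pop        # reapply the stashed changes and drop them from the stack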

Question: How To Revert Commit In GIT?

Given one or more existing commits, revert the changes that the related patches introduce,
and record some new commits that record them. This requires your working tree to be
clean (no modifications from the HEAD commit).

git-revert - Revert some existing commits

SYNOPSIS

git revert [--[no-]edit] [-n] [-m parent-number] [-s] [-S[<keyid>]] <commit>…


git revert --continue
git revert --quit
git revert --abort

Question: How To Delete Remote Repository In GIT?

Use the git remote rm command to remove a remote URL from your repository.
The git remote rm command takes one argument:

A remote name, for example, destination

Question: What Is GIT Stash Drop?

In case we no longer need a specific stash, we use the git stash drop command to remove it from the list of stashes.

By default, this command removes the most recently added stash.

To remove a specific stash, we pass its name as an argument to the git stash drop <stash name> command.
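
For example, stashes are usually referenced as stash@{n}; a hedged sketch:

git stash list            # show all stashes, e.g. stash@{0}, stash@{1}, ...
git stash drop            # remove the most recent stash (stash@{0})
git stash drop stash@{1}  # remove a specific stash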

Question: What Is Difference Between GIT and Subversion?

Here is a summary of Differences between GIT and Subversion

Git is a distributed VCS; SVN is a non-distributed (centralized) VCS.
SVN has a centralized server and repository; Git does not require a centralized server or repository.
The content in Git is stored as metadata; SVN stores files of content.
Git branches are easier to work with than SVN branches.
Git does not have the global revision number feature like SVN has.
Git has better content protection than SVN.
Git was developed for the Linux kernel by Linus Torvalds; SVN was developed by CollabNet, Inc.
Git is distributed under the GNU GPL, and its maintenance is overseen by Junio Hamano; Apache Subversion, or SVN, is distributed under the Apache License.

Question: What Is Difference Between GIT Fetch & GIT Pull?

GIT fetch – It downloads only the new data from the remote repository and does not
integrate any of the downloaded data into your working files. Providing a view of the data is
all it does.

GIT pull – It downloads as well as merges the data from the remote repository into the local
working files.

This may also lead to merging conflicts if the user’s local changes are not yet committed.
Using the “GIT stash” command hides the local changes.
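
A hedged comparison on the command line (origin and master are the conventional remote and branch names):

git fetch origin            # download new data; local working files are untouched
git merge origin/master     # explicitly merge the fetched changes when ready
git pull origin master      # fetch and merge in a single step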

Question: What is Git fork? How to create tag?

A fork is a copy of a repository. Forking a repository allows you to freely experiment with
changes without affecting the original project.

A fork is really a Github (not Git) construct to store a clone of the repo in your user account.
As a clone, it will contain all the branches in the main repo at the time you made the fork.

Create a tag (on GitHub; a command-line alternative follows this list):

Click the releases link on your repository page.
Click on Create a new release or Draft a new release.
Fill out the form fields, then click Publish release at the bottom.
After you create your tag on GitHub, you might want to fetch it into your local repository too: git fetch.
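
Alternatively, a tag can be created from the command line; a hedged sketch (the tag name v1.0 is hypothetical):

git tag -a v1.0 -m "Release 1.0"   # create an annotated tag on the current commit
git push origin v1.0               # publish the tag to the remote
git tag                            # list existing tags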

Question: What is difference between fork and branch?

A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project. A fork is really a GitHub (not Git) construct that stores a clone of the repo in your user account; as a clone, it contains all the branches in the main repo at the time you made the fork.

A branch, in contrast, is a lightweight pointer inside a single repository; it is used to develop a feature in isolation and can later be merged back into the main line of development.

Question: What Is Cherry Picking In GIT?

Cherry picking in git means to choose a commit from one branch and apply it onto another.

This is in contrast with other ways such as merge and rebase, which normally apply many commits onto another branch.

Make sure you are on the branch you want to apply the commit to (for example, git checkout master), then execute the following:

git cherry-pick <commit-hash>

Question: What Language GIT Is Written In?

Much of Git is written in C, along with some BASH scripts for UI wrappers and other bits.

Question: How To Rebase Master In GIT?

Rebasing is the process of moving a branch to a new base commit. The golden rule of git rebase is to never use it on public branches.

The only way to synchronize the two master branches is to merge them back together,
resulting in an extra merge commit and two sets of commits that contain the same
changes.

Question: What is ‘head’ in git and how many heads can be created in a
repository?

There can be any number of heads in a GIT repository. By default there is one head known
as HEAD in each repository in GIT.

HEAD is a ref (reference) to the currently checked-out commit. In normal states, it is actually a symbolic ref to the branch the user has checked out.

If you look at the contents of .git/HEAD you'll see something like "ref: refs/heads/master". The branch itself is a reference to the commit at the tip of the branch.

Question: Name some GIT commands and also explain their functions?

Here are some of the most important Git commands:

GIT diff – It shows the changes between commits, and between commits and the working tree.
GIT status – It shows the difference between the working directory and the index.
GIT stash apply – It is used to bring back the saved changes onto the working directory.
GIT rm – It removes the files from the staging area and also from the disk.
GIT log – It is used to find a specific commit in the history.
GIT add – It adds file changes in the existing directory to the index.
GIT reset – It is used to reset the index as well as the working directory to the state of the last commit.
GIT checkout – It is used to update the directories of the working tree with those from another branch without merging.
GIT ls-tree – It represents a tree object, including the mode and the name of each item.
GIT instaweb – It automatically directs a web browser and runs a web server with an interface into your local repository.

Question: What is a “conflict” in GIT and how is it resolved?

A conflict arises when a commit that has to be merged has changes in one place that the current commit also changes. Git will not be able to predict which change should take precedence.

In order to resolve the conflict in Git, we have to edit the files to fix the conflicting changes and then add the resolved files by running the "git add" command; later on, to commit the repaired merge, we run the "git commit" command. Git identifies the position and sets the parents of the commit correctly.

Question: How To Migrate From Subversion To GIT?

SubGit is a tool for a smooth and stress-free Subversion to Git migration, and also a solution for a company-wide Subversion to Git migration that:

Allows you to make use of all Git and Subversion features.
Provides a genuinely stress-free migration experience.
Doesn't require any change in the infrastructure that is already in place.
Is considered to be much better than git-svn.

Question: What Is Index In GIT?

The index is a single, large, binary file under the .git folder, which lists all files in the current branch, their SHA-1 checksums, timestamps, and file names. Before commits are completed, changes are formatted and reviewed in an intermediate area known as the index, also known as the staging area.

Question: What is a bare Git repository?

A bare Git repository is a repository that is created without a Working Tree.

git init --bare

Question: How do you revert a commit that has already been pushed and made public?
One or more commits can be reverted through the use of git revert. This command, in
essence, creates a new commit with patches that cancel out the changes introduced in
specific commits.

In case the commit that needs to be reverted has already been published or changing the
repository history is not an option, git revert can be used to revert commits. Running the
following command will revert the last two commits:

git revert HEAD~2..HEAD

Alternatively, one can always checkout the state of a particular commit from the past, and
commit it anew.

Question: How do you squash the last N commits into a single commit?

Squashing multiple commits into a single commit will overwrite history, and should be done
with caution. However, this is useful when working in feature branches.

To squash the last N commits of the current branch, run the following command (with {N}
replaced with the number of commits that you want to squash):

git rebase -i HEAD~{N}

Upon running this command, an editor will open with a list of these N commit messages,
one per line.

Each of these lines will begin with the word “pick”. Replacing “pick” with “squash” or “s” will
tell Git to combine the commit with the commit before it.

To combine all N commits into one, set every commit in the list to be squash except the first
one.

Upon exiting the editor, and if no conflict arises, git rebase will allow you to create a new
commit message for the new combined commit.

Question: What is a conflict in git and how can it be resolved?

A conflict arises when more than one commit that has to be merged has some change in
the same place or same line of code.

Git will not be able to predict which change should take precedence. This is a git conflict.

To resolve the conflict in git, edit the files to fix the conflicting changes and then add the
resolved files by running git add .

After that, to commit the repaired merge, run git commit . Git remembers that you are in
the middle of a merge, so it sets the parents of the commit correctly.

Question: How To Set Up A Script To Run Every Time a Repository Receives New Commits Through Push?

To configure a script to run every time a repository receives new commits through push,
one needs to define either a pre-receive, update, or a post-receive hook depending on
when exactly the script needs to be triggered.

Pre-receive hook in the destination repository is invoked when commits are pushed to it.
Any script bound to this hook will be executed before any references are updated.

This is a useful hook to run scripts that help enforce development policies.

Update hook works in a similar manner to pre-receive hook, and is also triggered before
any updates are actually made.

However, the update hook is called once for every commit that has been pushed to the
destination repository.

Finally, post-receive hook in the repository is invoked after the updates have been accepted
into the destination repository.

This is an ideal place to configure simple deployment scripts, invoke some continuous
integration systems, dispatch notification emails to repository maintainers, etc.

Hooks are local to every Git repository and are not versioned. Scripts can either be created
within the hooks directory inside the “.git” directory, or they can be created elsewhere and
links to those scripts can be placed within the directory.
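
As a hedged sketch, a post-receive hook is just an executable script placed at .git/hooks/post-receive in the receiving repository; the deployment directory below is hypothetical and the script must be made executable (chmod +x):

#!/bin/bash
# .git/hooks/post-receive - runs after pushed commits have been accepted
# each updated ref arrives on stdin as: <old-rev> <new-rev> <ref-name>
while read oldrev newrev refname; do
    echo "Received push to $refname ($oldrev -> $newrev)"
done
# example follow-up action: check out the latest code into a deploy directory
GIT_WORK_TREE=/var/www/myapp git checkout -f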

Question: What Is Commit Hash?

In Git each commit is given a unique hash. These hashes can be used to identify the
corresponding commits in various scenarios (such as while trying to checkout a particular
state of the code using the git checkout {hash} command).

Additionally, Git also maintains a number of aliases to certain commits, known as refs.

Also, every tag that you create in the repository effectively becomes a ref (and that is
exactly why you can use tags instead of commit hashes in various git commands).

Git also maintains a number of special aliases that change based on the state of the
repository, such as HEAD, FETCH_HEAD, MERGE_HEAD, etc.

Git also allows commits to be referred as relative to one another. For example, HEAD~1
refers to the commit parent to HEAD, HEAD~2 refers to the grandparent of HEAD, and so
on.

In case of merge commits, where the commit has two parents, ^ can be used to select one
of the two parents, e.g. HEAD^2 can be used to follow the second parent.

And finally, refspecs. These are used to map local and remote branches together.

However, these can be used to refer to commits that reside on remote branches allowing
one to control and manipulate them from a local Git environment.

Question: What are git hooks?

Git hooks are scripts that can run automatically on the occurrence of an event in a Git
repository. These are used for automation of workflow in GIT. Git hooks also help in
customizing the internal behavior of GIT. These are generally used for enforcing a GIT
commit policy.

Question: What Are Disadvantages Of GIT?


GIT has very few disadvantages. These are the scenarios when GIT is difficult to use.

Some of these are:

Binary Files: If we have a lot of binary (non-text) files in our project, then Git becomes very slow, e.g. projects with a lot of images or Word documents.

Steep Learning Curve: It takes some time for a newcomer to learn Git. Some of the Git commands are non-intuitive to a fresher.

Slow remote speed: Sometimes the use of remote repositories is slow due to network latency. Still, Git is faster than other VCSs.

Question: What is stored inside a commit object in GIT?

GIT commit object contains following information:

SHA1 name: A 40 character string to identify a commit

Files: List of files that represent the state of a project at a specific point of time

Reference: Any reference to parent commit objects

Question: What Is The GIT reset command?
The git reset command is used to reset the current HEAD to a specific state. By default it reverses the action of the git add command, so we use git reset to undo the changes of git add (see the example below).
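
A hedged example of unstaging a file (the file name config.yml is hypothetical):

git add config.yml      # the file is now staged
git reset config.yml    # undo the git add; the file is unstaged but the edits remain
git reset --hard HEAD   # more aggressive: discard index and working-tree changes entirely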

Question: How does GIT protect the code in a repository?
Git is made very secure since it contains the source code of an organization. All the objects in a Git repository are identified and verified with a hashing algorithm called SHA-1.

This algorithm is quite strong and fast. It protects the source code and other contents of the repository against possible malicious attacks.

This algorithm also maintains the integrity of GIT repository by protecting the change
history against accidental changes.

Continuous Integration Interview Questions

Question: What is Continuous Integration?

Continuous Integration is the practice of continuously integrating code, often multiple times per day. The purpose is to find problems quickly and deliver fixes more rapidly.

CI is a best practice for software development. It is done to ensure that after every code change there is no issue in the software.

Question: What Is Build Automation?

Build automation is the process of automating the creation of a software build and its associated processes, including compiling source code into binary code, packaging the binary code, and running automated tests.
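
For example, with a Maven project a single hedged command compiles, tests and packages the code (assuming a pom.xml is present):

mvn clean package   # compile sources, run unit tests and produce the build artifact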

Question: What Is Automated Deployment?

Automated Deployment is the process of consistently pushing a product to various environments on a “trigger.”

It enables you to quickly learn what to expect every time you deploy an environment with
much faster results.

This combined with Build Automation can save development teams a significant amount of
hours.

Automated Deployment saves clients from being extensively offline during development
and allows developers to build while “touching” fewer of a clients’ systems.

With an automated system, human error is greatly reduced, and when errors do occur, developers are able to catch them before live deployment – saving time and headaches.

You can even automate the contingency plan and make the site rollback to a working or
previous state as if nothing ever happened.

Clearly, this automated feature is super valuable in allowing applications and sites to
continue during fixes.

Additionally, contingency plans can be version-controlled, improved and even self-tested.

Question: How Continuous Integration Implemented?

Different tools for supporting Continuous Integration are Hudson, Jenkins and Bamboo.
Jenkins is the most popular one currently. They provide integration with various version
control systems and build tools.

Question: How Continuous Integration process does work?

Whenever a developer commits changes to the version control system, the Continuous Integration server detects that changes have been committed, and it goes through the following process:

The Continuous Integration server retrieves the latest copy of the changes.
It builds the code with the new changes using the build tool.
If the build fails, it notifies the developer.
After the build passes, it runs automated test cases; if the test cases fail, it notifies the developer.
It creates a package for the deployment environment.

Question: What Software Is Required For The Continuous Integration Process?

Here are the minimum tools you need to achieve CI:

Source code repository: To commit code and changes, for example Git.
Server: Continuous Integration software, for example Jenkins or TeamCity.
Build tool: It builds the application in a particular way, for example Maven or Gradle.
Deployment environment: The environment on which the application will be deployed.

Question: What Is Jenkins Software?

Jenkins is a self-contained, open source automation server used to automate all sorts of tasks related to building, testing, and delivering or deploying software.

Jenkins is one of the leading open source automation servers available. Jenkins has an
extensible, plugin-based architecture, enabling developers to create 1,400+ plugins to
adapt it to a multitude of build, test and deployment technology integrations.

Question: What is a Jenkins Pipeline?

Jenkins Pipeline (or simply “Pipeline”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

Question: What is the difference between Maven, Ant, Gradle and Jenkins?

Maven, Ant and Gradle are build technologies, whereas Jenkins is a continuous integration tool.

Question: Why do we use Jenkins?

Jenkins is an open-source continuous integration software tool written in the Java programming language for testing and reporting on isolated changes in a larger code base in real time.

The Jenkins software enables developers to find and solve defects in a code base rapidly
and to automate testing of their builds.

Question: What are CI Tools?

Here is the list of the top 8 Continuous Integration tools:


Jenkins
TeamCity
Travis CI
Go CD
Bamboo
GitLab CI
CircleCI
Codeship

Question: Which SCM tools does Jenkins support?

Jenkins supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial,
Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and arbitrary
shell scripts and Windows batch commands.

Question: Why do we use Pipelines in Jenkins?

Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that
span from simple continuous integration to comprehensive continuous delivery pipelines.

By modeling a series of related tasks, users can take advantage of the many features of
Pipeline:

Code: Pipelines are implemented in code and typically checked into source control,
giving teams the ability to edit, review, and iterate upon their delivery pipeline.
Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins
master.
Pausable: Pipelines can optionally stop and wait for human input or approval before
continuing the Pipeline run.
Versatile: Pipelines support complex real-world continuous delivery requirements,
including the ability to fork/join, loop, and perform work in parallel.
Extensible: The Pipeline plugin supports custom extensions to its DSL and multiple
options for integration with other plugins.

Question: How do you create Multibranch Pipeline in Jenkins?

The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project.

In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.

Question: What are Jobs in Jenkins?

Jenkins can be used to perform the typical build server work, such as doing
continuous/official/nightly builds, run tests, or perform some repetitive batch tasks. This is
called “free-style software project” in Jenkins.

Question: How do you configure automatic builds in Jenkins?

Builds in Jenkins can be triggered periodically (on a schedule specified in the configuration), when source changes in the project have been detected, or automatically by requesting a dedicated build-trigger URL, as sketched below:
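
As a hedged illustration of the URL trigger (the Jenkins host, job name, user and tokens below are hypothetical, and "Trigger builds remotely" must be enabled in the job configuration):

curl -X POST "http://jenkins.example.com:8080/job/my-app/build?token=MY_TRIGGER_TOKEN" \
     --user alice:API_TOKEN    # authenticate with a Jenkins user and API token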

Question: What is a Jenkinsfile?

A Jenkinsfile is a text file containing the definition of a Jenkins Pipeline and is checked into source control.
Amazon AWS DevOps Interview Questions

Question: What is Amazon Web Services?

Amazon Web Services provides services that help you practice DevOps at your company
and that are built first for use with AWS.

These tools automate manual tasks, help teams manage complex environments at scale,
and keep engineers in control of the high velocity that is enabled by DevOps

Question: What Are Benefits Of AWS for DevOps?

There are many benefits of using AWS for DevOps:

Get Started Fast: Each AWS service is ready to use if you have an AWS account. There is
no setup required or software to install.

Fully Managed Services: These services can help you take advantage of AWS resources
quicker. You can worry less about setting up, installing, and operating infrastructure on
your own. This lets you focus on your core product.

Built For Scalability: You can manage a single instance or scale to thousands using AWS
services. These services help you make the most of flexible compute resources by
simplifying provisioning, configuration, and scaling.

Programmable: You have the option to use each service via the AWS Command Line
Interface or through APIs and SDKs. You can also model and provision AWS resources
and your entire AWS infrastructure using declarative AWS CloudFormation templates.

Automation: AWS helps you use automation so you can build faster and more efficiently.
Using AWS services, you can automate manual tasks or processes such as deployments,
development & test workflows, container management, and configuration management.

Secure: Use AWS Identity and Access Management (IAM) to set user permissions and
policies. This gives you granular control over who can access your resources and how they
access those resources.

Question: How To Handle Continuous Integration and Continuous Delivery in AWS DevOps?

The AWS Developer Tools help you securely store and version your application's source code and automatically build, test, and deploy your application to AWS.

Question: What Is The Importance Of Buffer In Amazon Web Services?

An Elastic Load Balancer ensures that the incoming traffic is distributed optimally across
various AWS instances.

A buffer synchronizes different components and makes the arrangement additionally elastic to a burst of load or traffic.

Without a buffer, the components are prone to receiving and processing requests in an unstable way.

The buffer creates equilibrium between the various components and makes them work at the same rate to supply faster services.

Question: What Are The Components Involved In Amazon Web Services?

There are 4 main components:

Amazon S3: With this object storage service, one can store and retrieve the key-based data used in creating a cloud architecture, and the output produced for a specified key can also be stored in this component.

Amazon EC2 instance: Helpful for running large distributed systems, such as a Hadoop cluster. Automatic parallelization and job scheduling can be achieved with this component.

Amazon SQS: This component acts as a mediator between different controllers. It is also used for buffering the requests received by the Amazon manager.

Amazon SimpleDB: Helps in storing the transitional state logs and the tasks executed by the consumers.
Question: How is a Spot instance different from an On-
Demand instance or Reserved Instance?

Spot Instance, On-Demand instance and Reserved Instances are all models for pricing.
Moving along, spot instances provide the ability for customers to purchase compute
capacity with no upfront commitment, at hourly rates usually lower than the On-Demand
rate in each region.

Spot instances are just like bidding, the bidding price is called Spot Price. The Spot Price
fluctuates based on supply and demand for instances, but customers will never pay more
than the maximum price they have specified.

If the Spot Price moves higher than a customer’s maximum price, the customer’s EC2
instance will be shut down automatically.

But the reverse is not true, if the Spot prices come down again, your EC2 instance will not
be launched automatically, one has to do that manually.

With Spot and On-Demand instances, there is no commitment for the duration from the user's side; however, with Reserved Instances, one has to commit to the time period that was chosen.

Question: What are the best practices for security in Amazon EC2?

There are several best practices to secure Amazon EC2. A few of them are given below:

Use AWS Identity and Access Management (IAM) to control access to your AWS
resources.
Restrict access by only allowing trusted hosts or networks to access ports on your
instance.
Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege – only open up the permissions that you require.
Disable password-based logins for instances launched from your AMI. Passwords
can be found or cracked, and are a security risk.

Question: What is AWS CodeBuild in AWS Devops?

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and
produces software packages that are ready to deploy.

With CodeBuild, you don’t need to provision, manage, and scale your own build servers.
CodeBuild scales continuously and processes multiple builds concurrently, so your builds
are not left waiting in a queue.

Question: What is Amazon Elastic Container Service in AWS Devops?

Amazon Elastic Container Service (ECS) is a highly scalable, high performance container
management service that supports Docker containers and allows you to easily run
applications on a managed cluster of Amazon EC2 instances.

Question: What is AWS Lambda in AWS Devops?

AWS Lambda lets you run code without provisioning or managing servers. With Lambda,
you can run code for virtually any type of application or backend service, all with zero
administration.
Just upload your code and Lambda takes care of everything required to run and scale your
code with high availability.
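
As a hedged illustration, once a function exists it can be invoked from the AWS CLI (the function name my-function is hypothetical):

aws lambda invoke --function-name my-function response.json   # synchronously invoke the function
cat response.json                                             # inspect the returned payload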

Splunk DevOps Interview Questions

Question: What is Splunk?

The platform of Splunk allows you to get visibility into machine data generated from
different networks, servers, devices, and hardware.

It can give insights into the application management, threat visibility, compliance, security,
etc. so it is used to analyze machine data. The data is collected from the forwarder from the
source and forwarded to the indexer. The data is stored locally on a host machine or cloud.
Then on the data stored in the indexer the search head searches, visualizes, analyzes and
performs various other functions.

Question: What Are The Components Of Splunk?

The main components of Splunk are Forwarders, Indexers and Search Heads. A Deployment Server (or Management Console Host) comes into the picture in the case of a larger environment.

Deployment servers act like an antivirus policy server for setting up exceptions and groups, so that you can map and create different sets of data collection policies, each for either a Windows-based server, a Linux-based server or a Solaris-based server. Splunk has four important components:

Indexer – It indexes the machine data.
Forwarder – Refers to Splunk instances that forward data to the remote indexers.
Search Head – Provides the GUI for searching.
Deployment Server – Manages the Splunk components like the indexer, forwarder, and search head in the computing environment.

Question: What are alerts in Splunk?

An alert is an action that a saved search triggers on regular intervals set over a time range,
based on the results of the search.

When the alerts are triggered, various actions occur as a consequence. For instance, an email is sent to a predefined list of people when a search is triggered.

Three types of alerts:

1. Pre-result alerts : Most commonly used alert type and runs in real-time for an all-
time span. These alerts are designed such that whenever a search returns a result,
they are triggered.
2. Scheduled alerts : The second most common- scheduled results are set up to
evaluate the results of a historical search result running over a set time range on a
regular schedule. You can define a time range, schedule and the trigger condition to
an alert.
3. Rolling-window alerts: These are a hybrid of pre-result and scheduled alerts. Like the former, they are based on real-time search, but they do not trigger each time the search returns a matching result. Instead, they examine all events mapped in real time within the rolling window and trigger when the specified condition is met by an event in that window, much as a scheduled alert is triggered on a scheduled search.

Question: What Are The Categories Of SPL Commands?

SPL commands are divided into five categories:

1. Sorting Results – Ordering results and (optionally) limiting the number of results.
2. Filtering Results – It takes a set of events or results and filters them into a smaller
set of results.
3. Grouping Results – Grouping events so you can see patterns.
4. Filtering, Modifying and Adding Fields – Filtering out some fields to focus on the ones you need, or modifying or adding fields to enrich your results or events.
5. Reporting Results – Taking search results and generating a summary for reporting.

Question: What Happens If The License Master Is Unreachable?

In case the license master is unreachable, then it is just not possible to search the data.

However, the data coming in to the Indexer will not be affected. The data will continue to
flow into your Splunk deployment.

The indexers will continue to index the data as usual; however, you will get a warning message on top of your Search Head or web UI saying that you have exceeded the indexing volume.

You then either need to reduce the amount of data coming in or buy a higher-capacity license. In short, indexing does not stop; only searching is halted.

Question: What are common port numbers used by Splunk?

Common port numbers on which the default services run are:

Splunk Management Port: 8089
Splunk Index Replication Port: 8080
KV store: 8191
Splunk Web Port: 8000
Splunk Indexing Port: 9997
Splunk network port: 514

Question: What Are Splunk Buckets? Explain The Bucket Lifecycle?

A directory that contains indexed data is known as a Splunk bucket. It also contains events
of a certain period. Bucket lifecycle includes following stages:

Hot – It contains newly indexed data and is open for writing. For each index, there
are one or more hot buckets available
Warm – Data rolled from hot
Cold – Data rolled from warm
Frozen – Data rolled from cold. The indexer deletes frozen data by default but users
can also archive it.
Thawed – Data restored from an archive. If you archive frozen data , you can later
return it to the index by thawing (defrosting) it.

Question: Explain Data Models and Pivot?

Data models are used for creating a structured hierarchical model of data. It can be used
when you have a large amount of unstructured data, and when you want to make use of
that information without using complex search queries.

A few use cases of Data models are:

Create Sales Reports: If you have a sales report, then you can easily create the total
number of successful purchases, below that you can create a child object containing
the list of failed purchases and other views
Set Access Levels: If you want a structured view of users and their various access
levels, you can use a data model

On the other hand with pivots, you have the flexibility to create the front views of your
results and then pick and choose the most appropriate filter for a better view of results.

Question: What Is File Precedence In Splunk?

File precedence is an important aspect of troubleshooting in Splunk for an administrator,


developer, as well as an architect.

All of Splunk’s configurations are written in .conf files. There can be multiple copies present
for each of these files, and thus it is important to know the role these files play when a
Splunk instance is running or restarted. To determine the priority among copies of a
configuration file, Splunk software first determines the directory scheme. The directory
schemes are either a) Global or b) App/user. When the context is global (that is, where
there’s no app/user context), directory priority descends in this order:

1. System local directory — highest priority


2. App local directories
3. App default directories
4. System default directory — lowest priority

When the context is app/user, directory priority descends from user to app to system:
1. User directories for current user — highest priority
2. App directories for currently running app (local, followed by default)
3. App directories for all other apps (local, followed by default) — for exported settings
only
4. System directories (local, followed by default) — lowest priority

Question: Difference Between Search Time And Index Time Field Extractions?

Search time field extraction refers to the fields extracted while performing searches.

Whereas, fields extracted when the data comes to the indexer are referred to as Index time
field extraction.
You can set up the indexer time field extraction either at the forwarder level or at the
indexer level.

Another difference is that Search time field extraction’s extracted fields are not part of the
metadata, so they do not consume disk space.

Whereas index time field extraction’s extracted fields are a part of metadata and hence
consume disk space.

Question: What Is Source Type In Splunk?


Source type is a default field which is used to identify the data structure of an incoming
event. Source type determines how Splunk Enterprise formats the data during the indexing
process.
Source type can be set at the forwarder level for indexer extraction to identify different data
formats.

Question: What is SOS?

SOS stands for Splunk on Splunk. It is a Splunk app that provides graphical view of your
Splunk environment performance and issues.
It has the following purposes:
Diagnostic tool to analyze and troubleshoot problems
Examine Splunk environment performance
Solve indexing performance issues
Observe scheduler activities and issues
See the details of scheduler and user driven search activity
Search, view and compare configuration files of Splunk

Question: What Is Splunk Indexer And Explain Its Stages?

The indexer is the Splunk Enterprise component that creates and manages indexes. The main functions of an indexer are:
Indexing incoming data
Searching indexed data

The Splunk indexer has the following stages:

Input: Splunk Enterprise acquires the raw data from various input sources, breaks it into 64K blocks, and assigns them some metadata keys. These keys include the host, source and source type of the data.
Parsing: Also known as event processing; during this stage, Splunk Enterprise analyzes and transforms the data, breaks the data into streams, identifies, parses and sets timestamps, and performs metadata annotation and transformation of the data.
Indexing: In this phase, the parsed events are written to the index on disk, including both the compressed data and the associated index files.
Searching: The 'Search' function plays a major role during this phase as it handles all searching aspects (interactive and scheduled searches, reports, dashboards, alerts) on the indexed data, and stores saved searches, events, field extractions and views.

Question: State The Difference Between The Stats and Eventstats Commands?

Stats – This command produces summary statistics of all the existing fields in your search results and stores them as values in new fields.
Eventstats – It is the same as the stats command, except that the aggregation results are added inline to every event, and only if the aggregation is applicable to that event. It computes the requested statistics like stats does, but aggregates them back onto the original raw data.

log4J DevOps Interview Questions

Question: What is log4j?

log4j is a reliable, fast and flexible logging framework (APIs) written in Java, which is
distributed under the Apache Software License.

log4j has been ported to the C, C++, C#, Perl, Python, Ruby, and Eiffel languages.

log4j is highly configurable through external configuration files at runtime. It views the
logging process in terms of levels of priorities and offers mechanisms to direct logging
information to a great variety of destinations.

Question: What Are The Features Of Log4j

Log4j is widely used framework and here are features of log4j

It is thread-safe.
It is optimized for speed.


It is based on a named logger hierarchy.
It supports multiple output appenders per logger.
It supports internationalization.
It is not restricted to a predefined set of facilities.
Logging behavior can be set at runtime using a configuration file.
It is designed to handle Java Exceptions from the start.
It uses multiple levels, namely ALL, TRACE, DEBUG, INFO, WARN, ERROR and
FATAL.
The format of the log output can be easily changed by extending the Layout class.
The target of the log output as well as the writing strategy can be altered by
implementations of the Appender interface.
It is fail-stop. However, although it certainly strives to ensure delivery, log4j does not
guarantee that each log statement will be delivered to its destination.

Question: What are the components of log4j?

log4j has three main components:

loggers: Responsible for capturing logging information.
appenders: Responsible for publishing logging information to various preferred destinations.
layouts: Responsible for formatting logging information in different styles.

Question: How do you initialize and use Log4j?

import org.apache.log4j.Logger;

public class LoggerTest {
    static Logger log = Logger.getLogger(LoggerTest.class.getName());

    public void myLoggerMethod(String var2) {
        if (log.isDebugEnabled()) log.debug("This is a test message " + var2);
    }
}

Question: What are Pros and Cons of Logging?

Following are the pros and cons of logging.

Pros: Logging is an important component of software development. Well-written logging code offers quick debugging, easy maintenance, and structured storage of an application's runtime information.

Cons: Logging does have its drawbacks. It can slow down an application, and if it is too verbose, it can cause scrolling blindness. To alleviate these concerns, log4j is designed to be reliable, fast, and extensible. Since logging is rarely the main focus of an application, the log4j API strives to be simple to understand and to use.

Question: What Is The Purpose Of Logger Object?

Logger Object − The top-level layer of the log4j architecture is the Logger, which provides the Logger object.

The Logger object is responsible for capturing logging information, and Logger objects are stored in a namespace hierarchy.

Question: What is the purpose of Layout object?

The layout layer of log4j architecture provides objects which are used to format logging
information in different styles. It provides support to appender objects before publishing
logging information.

Layout objects play an important role in publishing logging information in a way that is
human-readable and reusable.

Question: What is the purpose of Appender object?

The Appender object is responsible for publishing logging information to various preferred
destinations such as a database, file, console, UNIX Syslog, etc.

Question: What Is The Purpose Of ObjectRenderer Object?

The ObjectRenderer object is specialized in providing a String representation of different objects passed to the logging framework.

This object is used by Layout objects to prepare the final logging information.

Question: What Is LogManager object?

The LogManager object manages the logging framework. It is responsible for reading the
initial configuration parameters from a system-wide configuration file or a configuration
class.

Question: How Will You Define A File Appender Using log4j.properties?

The following syntax defines a file appender:


log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${log}/log.out
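
A slightly fuller sketch (the logger level and pattern are just an illustration) usually attaches a layout to the appender and wires the appender to a logger:

log4j.rootLogger=DEBUG, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${log}/log.out
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n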

Question: What Is The Purpose Of Threshold In Appender?

Appender can have a threshold level associated with it independent of the logger level.
The Appender ignores any logging messages that have a level lower than the threshold
level.
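
For example, reusing the hypothetical FILE appender from the previous question, the appender can be limited to WARN and above even though the root logger is set to DEBUG:

log4j.rootLogger=DEBUG, FILE
# DEBUG and INFO events reaching this appender are discarded by its threshold
log4j.appender.FILE.Threshold=WARN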

Docker DevOps Interview Questions

Question: What is Docker?


Docker provides a container for managing software workloads on shared infrastructure, all
while keeping them isolated from one another.

Docker is a tool designed to make it easier to create, deploy, and run applications by using
containers.

Containers allow a developer to package up an application with all of the parts it needs,
such as libraries and other dependencies, and ship it all out as one package.

By doing so, the developer can rest assured that the application will run on any other Linux
machine regardless of any customized settings that machine might have that could differ
from the machine used for writing and testing the code. In a way, Docker is a bit like a
virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating
system, Docker allows applications to use the same Linux kernel as the system that they're
running on, and only requires applications to be shipped with things not already running on the
host computer. This gives a significant performance boost and reduces the size of the
application.

Question: What Are Linux Containers?

Linux containers, in short, contain applications in a way that keeps them isolated from the
host system that they run on.

Containers allow a developer to package up an application with all of the parts it needs,
such as libraries and other dependencies, and ship it all out as one package.

And they are designed to make it easier to provide a consistent experience as developers
and system administrators move code from development environments into production in a
fast and replicable way.

Question: Who Is Docker For?

Docker is a tool that is designed to benefit both developers and system administrators,
making it a part of many DevOps (developers + operations) toolchains.

For developers, it means that they can focus on writing code without worrying about
the system that it will ultimately be running on.

It also allows them to get a head start by using one of thousands of programs already
designed to run in a Docker container as a part of their application.

For operations staff, Docker gives flexibility and potentially reduces the number of systems
needed because of its small footprint and lower overhead.

Question: What Is Docker Container?

Docker containers include the application and all of its dependencies, but share the kernel
with other containers, running as isolated processes in user space on the host operating
system.

Docker containers are not tied to any specific infrastructure: they run on any computer, on
any infrastructure, and in any cloud.

You can also explain how to create a Docker container: Docker containers can be created either by
creating a Docker image and then running it, or by using images that are already present
on Docker Hub. Docker containers are basically runtime instances of Docker images.
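
For example, a container can be created from an image that is already published on Docker Hub (the nginx image is used here purely as an illustration):

docker pull nginx                 # download the image from Docker Hub
docker run -d --name web nginx    # create and start a container from that image
docker ps                         # confirm the container is running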

Question: What Is Docker Image?

Docker image is the source of Docker container. In other words, Docker images are used
to create containers.

Images are created with the build command, and they’ll produce a container when started
with run.

Images are stored in a Docker registry such as registry.hub.docker.com. Because they can
become quite large, images are designed to be composed of layers of other images,
allowing a minimal amount of data to be sent when transferring images over the network.
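
A minimal sketch of that build-then-run flow, assuming a Dockerfile in the current directory and a hypothetical image name:

docker build -t myapp:1.0 .   # build an image from the Dockerfile
docker images                 # list local images, including the new one
docker run -d myapp:1.0       # start a container from the freshly built image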

Question: What Is Docker Hub?

Docker Hub is a cloud-based registry service which allows you to link to code repositories,
build and test your images, store manually pushed images, and link to Docker
Cloud so you can deploy images to your hosts.

It provides a centralized resource for container image discovery, distribution and change
management, user and team collaboration, and workflow automation throughout the
development pipeline.
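
A typical publishing workflow against Docker Hub might look like this (the account and image names are hypothetical):

docker login                             # authenticate with your Docker Hub account
docker tag myapp:1.0 myuser/myapp:1.0    # prefix the image with your Docker Hub namespace
docker push myuser/myapp:1.0             # upload the image to the registry
docker pull myuser/myapp:1.0             # others with access can now pull and run it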

Question: What is Docker Swarm?

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single,
virtual Docker host.

Docker Swarm serves the standard Docker API, any tool that already communicates with a
Docker daemon can use Swarm to transparently scale to multiple hosts.
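
As a rough sketch using the swarm mode built into newer Docker releases (the IP address and service name are hypothetical), one host initializes the swarm, other hosts join it, and services are then scheduled across the pool:

docker swarm init --advertise-addr 192.168.1.10                 # first host becomes a manager
docker swarm join --token <worker-token> 192.168.1.10:2377      # run on each additional host
docker service create --name web --replicas 3 -p 80:80 nginx    # spread 3 replicas across the swarm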

You can also mention some tools that work with Swarm:

Dokku
Docker Compose
Docker Machine
Jenkins

Question: What is a Dockerfile used for?

A Dockerfile is a text document that contains all the commands a user could call on the
command line to assemble an image.

Using docker build, users can create an automated build that executes several command-line instructions in succession.
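
A minimal hypothetical Dockerfile for a small Python application could look like this:

# base image pulled from a registry
FROM python:3.9-slim
WORKDIR /app
# dependencies are installed at build time and cached as an image layer
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# default command executed when a container starts from this image
CMD ["python", "app.py"]

Running docker build -t myapp . in the directory containing this file turns it into an image that can then be started with docker run.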

Question: How is Docker different from other container technologies?

Docker containers are easy to deploy in a cloud. It can get more applications running on the
same hardware than other technologies.

It makes it easy for developers to quickly create ready-to-run containerized applications,
and it makes managing and deploying applications much easier. You can even share
containers with your applications.

Question: How to create a Docker container?

We can use a Docker image to create a Docker container with the below command:

docker run -t -i <image_name> <command>

This command will create and start a container. You should also add: if you want to check
the list of all containers (running and stopped) along with their status on a host, use the below command:

docker ps -a

Question: How to stop and restart the Docker container?

In order to stop the Docker container you can use the below command:

docker stop <container_id>

Now to restart the Docker container you can use:

docker restart <container_id>

Question: What is the difference between docker run and docker create?

The primary difference is that using ‘docker create’ creates a container in a stopped state.
Bonus point: You can use ‘docker create’ and store the outputted container ID for later
use. The best way to do this is to use ‘docker run’ with --cidfile FILE_NAME, as running it
again won’t be allowed to overwrite the file.
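
A small sketch of that pattern (the file and image names are hypothetical):

docker create --cidfile ./web.cid nginx   # creates a stopped container and writes its ID to web.cid
docker start "$(cat ./web.cid)"           # start it later using the stored ID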

Question: What are the four states a Docker container can be in?

Running
Paused
Restarting
Exited

Question: What Is The Difference Between a Repository and a Registry?

Docker registry is a service for hosting and distributing images. Docker repository is a
collection of related Docker images.

Question: How to link containers?

The simplest way is to use network port mapping. There’s also the --link flag, which is
deprecated.
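
For illustration (container and network names are hypothetical), port mapping exposes a container on the host, while a user-defined bridge network is the commonly recommended replacement for the deprecated --link flag:

docker run -d -p 8080:80 --name web nginx         # port mapping: reachable on host port 8080
docker network create appnet                      # user-defined bridge network
docker run -d --network appnet --name db redis
docker run -d --network appnet --name api myapi   # "api" can now reach "db" by container name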

Question: What is the difference between Docker RUN, CMD and ENTRYPOINT?

A CMD does not execute anything at build time, but specifies the intended command for
the image.
RUN actually runs a command and commits the result.
If you would like your container to run the same executable every time, then you should
consider using ENTRYPOINT in combination with CMD.
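
A short hypothetical Dockerfile fragment that puts the three instructions side by side:

FROM debian:stable-slim
# RUN executes at build time and commits the result as a new image layer
RUN apt-get update && apt-get install -y curl
# ENTRYPOINT fixes the executable every container will run
ENTRYPOINT ["curl"]
# CMD only supplies default arguments and is overridden by arguments passed to docker run
CMD ["--help"]

With this image, docker run <image> https://example.com still invokes curl but replaces --help with the URL.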

Question: How many containers can run per host?

As far as the number of containers that can be run, this really depends on your environment.
The size of your applications as well as the amount of available resources will all affect the
number of containers that can be run in your environment.
Containers unfortunately are not magical. They can’t create new CPU from scratch. They
do, however, provide a more efficient way of utilizing your resources.
The containers themselves are super lightweight (remember, shared OS vs individual OS
per container) and only last as long as the process they are running. Immutable
infrastructure if you will.


VMware DevOps Interview Questions

Question: What is VmWare?

VMware was founded in 1998 by five different IT experts. The company officially launched
its first product, VMware Workstation, in 1999, which was followed by the VMware GSX
Server in 2001. The company has launched many additional products since that time.
VMware's desktop software is compatible with all major OSs, including Linux, Microsoft
Windows, and Mac OS X. VMware provides three different types of desktop software:

VMware Workstation: This application is used to install and run multiple copies or
instances of the same or different operating systems on a single physical computer.
VMware Fusion: This product was designed for Mac users and provides extra
compatibility with all other VMware products and applications.
VMware Player: This product was launched as freeware by VMware for users who do
not have licensed VMware products. It is intended only for personal use.

VMware's software hypervisors intended for servers are bare-metal embedded hypervisors
that can run directly on the server hardware without the need for an underlying primary OS.
VMware’s line of server software includes:
VMware ESX Server: This is an enterprise-level solution, which is built to provide
better functionality in comparison to the freeware VMware Server, resulting from its
lower system overhead. VMware ESX is integrated with VMware vCenter, which
provides additional solutions to improve the manageability and consistency of the
server implementation.
VMware ESXi Server: This server is similar to the ESX Server, except that the service
console is replaced with a BusyBox installation, and it requires very little disk space to
operate.
VMware Server: Freeware software that can be used over existing operating systems
like Linux or Microsoft Windows.

Question: What is Virtualization?

The process of creating virtual versions of physical components, i.e. servers, storage
devices, and network devices, on a physical host is called virtualization.

Virtualization lets you run multiple virtual machines on a single physical machine, which is
called an ESXi host.

Question: What are different types of virtualization?

There are 5 basic types of virtualization:

Server virtualization: consolidates physical servers, and multiple OSs can be run on
a single server.
Network virtualization: provides a complete reproduction of a physical network as a
software-defined network.
Storage virtualization: provides an abstraction layer for physical storage resources so
they can be managed and optimized in a virtual deployment.
Application virtualization: increases the mobility of applications and allows migration of
applications from one host to another with minimal downtime.
Desktop virtualization: virtualizes desktops to reduce cost and increase service.

Question: What is Service Console?

The service console is developed based upon the Red Hat Linux operating system; it is used
to manage the VMkernel.

Question: What is vCenter Agent?

The VC agent is an agent installed on the ESX server which enables communication between vCenter and the ESX server.

This agent is installed on the ESX/ESXi host when you add the host to vCenter.

Question: What is VMKernel?

The VMware kernel (VMkernel) is a proprietary kernel of VMware and is not based on any of the flavors of
Linux operating systems.

The VMkernel requires an operating system to boot and manage the kernel. A service console
is provided when the VMware kernel is booted.

Only the service console is based upon the Red Hat Linux OS, not the VMkernel.

Question: What is VMkernel and why is it important?

VMkernel is a virtualization interface between a Virtual Machine and the ESXi host which
stores VMs.

It is responsible to allocate all available resources of ESXi host to VMs such as memory,
CPU, storage etc.

It also controls special services such as vMotion, Fault Tolerance, NFS, traffic management,
and iSCSI.

To access these services, VMkernel port can be configured on ESXi server using a
standard or distributed vSwitch. Without VMkernel, hosted VMs cannot communicate with
ESXi server.

Question: What is a hypervisor and what are its types?

A hypervisor is a virtualization layer that enables multiple operating systems to share a single
hardware host.

Each operating system or VM is allocated physical resources such as memory, CPU, and
storage by the host. There are two types of hypervisors:

Hosted hypervisor (works as an application, e.g. VMware Workstation)
Bare-metal hypervisor (virtualization software, e.g. VMvisor/ESXi or Hyper-V, which is installed directly
onto the hardware and controls all physical resources).

Question: What is virtual networking?

A network of VMs running on a physical server that are connected logically with each other
is called virtual networking.

Question: What is vSS?

vSS stands for Virtual Standard Switch; it is responsible for the communication of VMs hosted on
a single physical host.
It works like a physical switch: it automatically detects a VM which wants to communicate with
another VM on the same physical server.

Question: What is a VMkernel adapter and why is it used?

A VMkernel adapter provides network connectivity to the ESXi host and handles network traffic
for vMotion, IP Storage, NAS, Fault Tolerance, and vSAN.

For each type of traffic, such as vMotion, vSAN, etc., a separate VMkernel adapter should be
created and configured.

Question: What are the three port groups configured in ESXi networking?

Virtual Machine Port Group – used for the virtual machine network
Service Console Port Group – used for service console communications
VMKernel Port Group – used for VMotion, iSCSI and NFS communications

Question: What are main components of vCenter Server architecture?

There are three main components of vCenter Server architecture.


vSphere Client and Web Client: a user interface.
vCenter Server database: SQL server or embedded PostgreSQL to store inventory,
security roles, resource pools etc.
SSO: a security domain in virtual environment

Question: What is datastore?

A Datastore is a storage location where virtual machine files are stored and accessed.
A datastore is based on a file system such as VMFS or NFS.

Question: How many disk types are in VMware?

There are three disk types in vSphere.

1. Thick Provision Lazy Zeroed: every virtual disk is created in this disk format by default.
Physical space is allocated to a VM when the virtual disk is created. It can’t be
converted to a thin disk.
2. Thick Provision Eager Zeroed: this disk type is used with VMware Fault Tolerance. All
required disk space is allocated to a VM at the time of creation. It takes more time to
create a virtual disk compared to the other disk formats.
3. Thin Provision: it provides on-demand allocation of disk space to a VM. When the data
size grows, the size of the disk grows with it. Storage capacity utilization can be up to 100%
with thin provisioning.

Question: What is Storage vMotion?

It is similar to traditional vMotion; in Storage vMotion, the virtual disk of a VM is moved from
one datastore to another. During Storage vMotion, thick provisioned disks can be converted
to thin provisioned disks.

Question: What is the use of the VMkernel port?

The VMkernel port is used by ESX/ESXi for vMotion, iSCSI and NFS communications. ESXi uses
the VMkernel port as the management network since it doesn’t have a service console built into it.

Question: What are different types of Partitions in ESX server?

/ (root), swap, /var, /var/core, /opt, /home, /tmp

Question: Explain What Is VMware DRS?

VMware DRS stands for Distributed Resource Scheduler; it dynamically balances
resources across the various hosts in a cluster or resource pool. It enables users to determine
the rules and policies which decide how virtual machines share resources, and how these
resources should be prioritized among multiple virtual machines.

DevOps Testing Interview Questions

Question: What is Continuous Testing?

Continuous Testing is the process of executing automated tests to obtain immediate
feedback on the business risks associated with the latest build.

In this way, each build is tested continuously, allowing development teams to get fast
feedback so that they can prevent those problems from progressing to the next stage of the
software delivery life cycle.

Question: What is Automation Testing?

Automation testing is the process of automating the manual testing process. Automation
testing involves the use of separate testing tools, which can be executed repeatedly and
don’t require any manual intervention.

Question: What Are The Benefits of Automation Testing?

Here are some of the benefits of automation testing:

Supports execution of repeated test cases
Aids in testing a large test matrix
Enables parallel execution
Encourages unattended execution
Improves accuracy, thereby reducing human-generated errors
Saves time and money

Question: Why is Continuous Testing important for DevOps?

Continuous Testing allows any change made in the code to be tested immediately.

This avoids the problems created by having “big-bang” testing left to the end of the
development cycle, such as release delays and quality issues.

In this way, Continuous Testing facilitates more frequent and good quality releases.

Question: What are the Testing types supported by Selenium?

Selenium supports two types of testing:

Regression Testing: It is the act of retesting a product around an area where a bug was
fixed.

Functional Testing: It refers to the testing of software features (functional points)
individually.

Question: What is the Difference Between Assert and Verify commands in Selenium?

Assert command checks whether the given condition is true or false. If the assertion fails,
the execution of that test is halted and no further test steps are executed.

Verify command also checks whether the given condition is true or false. Irrespective of
the condition being true or false, the program execution doesn’t halt, i.e. any failure during
verification will not stop the execution and all the test steps will be executed.

Summary
DevOps refers to a wide range of tools, processes, and practices used by companies to
improve their build, deployment, testing, and release life cycles.

In order to ace a DevOps interview, you need to have a deep understanding of all of these
tools and processes.

Most of the technologies and processes used to implement DevOps are not isolated. Most
probably you are already familiar with many of these. All you have to do is to prepare for
them from a DevOps perspective.

In this guide I have created the largest set of interview questions. Each section in this guide
caters to a specific area of DevOps.

In order to increase your chances of success in a DevOps interview, you need to go through
all of these questions.
