

UNIT V PROJECT MANAGEMENT 9


Software Project Management- Software Configuration Management - Project Scheduling-
DevOps: Motivation-Cloud as a platform-Operations- Deployment Pipeline: Overall
Architecture Building and Testing-Deployment- Tools- Case Study

1. Software Project Management

Software Project Management (SPM) is a systematic way of planning and leading software projects.
It is the part of project management in which software projects are planned, implemented,
monitored, and controlled.
Need for Software Project Management
Software is a non-physical product. Software development is a relatively new stream of business,
and there is little accumulated experience in building software products. Most software products
are made to fit clients’ requirements. Most importantly, the underlying technology changes and
advances so frequently and rapidly that experience from one product may not apply to the next.
These business and environmental constraints increase risk in software development, so it is
essential to manage software projects efficiently. An organization must deliver quality products,
keep costs within the client’s budget constraints, and deliver the project on schedule. Software
project management is therefore necessary to incorporate user requirements along with budget
and time constraints.

Types of Management in SPM


1. Conflict Management
Conflict management is the process to restrict the negative features of conflict while increasing
the positive features of conflict. The goal of conflict management is to improve learning and group
results including efficacy or performance in an organizational setting. Properly managed conflict
can enhance group results.
2. Risk Management
Risk management is the identification and analysis of risks, followed by the coordinated and
economical application of resources to minimize, monitor, and control the probability or impact
of unfortunate events, or to maximize the realization of opportunities.
3. Requirement Management
It is the process of analyzing, prioritizing, tracking, and documenting requirements and then
supervising change and communicating to pertinent stakeholders. It is a continuous process during
a project.
4. Change Management
Change management is a systematic approach to dealing with the transition or transformation of
an organization’s goals, processes, or technologies. The purpose of change management is to
execute strategies for effecting change, controlling change, and helping people to adapt to change.
5. Software Configuration Management
Software configuration management is the process of controlling and tracking changes in the
software, part of the larger cross-disciplinary field of configuration management. Software
configuration management includes revision control and the establishment of baselines.
6. Release Management
Release management is the task of planning, controlling, and scheduling builds and deploying
releases. Release management ensures that the organization delivers new and enhanced services
required by the customer while protecting the integrity of existing services.
Aspects of Software Project Management
Software Project Management tackles the following focus areas, each a broad upside of the discipline:
1. Planning
The software project manager lays out the complete project’s blueprint. The project plan will
outline the scope, resources, timelines, techniques, strategy, communication, testing, and
maintenance steps. SPM can aid greatly here.
2. Leading
A software project manager brings together and leads a team of engineers, strategists,
programmers, designers, and data scientists. Leading a team necessitates exceptional
communication, interpersonal, and leadership abilities. One can only hope to do this effectively
if one sticks with the core SPM principles.
3. Execution
SPM comes to the rescue here as well: the person in charge of software projects (if well versed
in SPM/Agile methodologies) will ensure that each stage of the project is completed
successfully. Measuring progress, monitoring how teams function, and generating status
reports are all part of this process.
4. Time Management
Abiding by a timeline is crucial to completing deliverables successfully. This is especially
difficult when managing software projects because changes to the original project charter are
unavoidable over time. To assure progress in the face of blockages or changes, software project
managers ought to be specialists in managing risk and emergency preparedness.
5. Budget
Software project managers, like conventional project managers, are responsible for generating a
project budget and adhering to it as closely as feasible, regulating spending, and reassigning funds
as needed. SPM teaches us how to effectively manage the monetary aspect of projects to avoid
running into a financial crunch later on in the project.
6. Maintenance
Software project management emphasizes continuous product testing to find and repair defects
early, tailor the end product to the needs of the client, and keep the project on track. The software
project manager ensures that the product is thoroughly tested, analyzed, and adjusted as
needed. Another point in favor of SPM.

Downsides of Software Project Management

Numerous issues can develop if a Software project manager lacks the necessary expertise or
knowledge. Software Project management has several drawbacks, including resource loss,
scheduling difficulty, data protection concerns, and interpersonal conflicts between
Developers/Engineers/Stakeholders. Furthermore, outsourcing work or recruiting
additional personnel to complete the project may result in hefty costs for one’s company.

1. Costs are High


Engaging in Software Project Management strategies means spending money on various kinds of
project management tools, software, and services. These initiatives can be expensive and
time-consuming to put in place. Because your team will be using them as well, they may require
training. One may also need to recruit subject-matter experts or specialists to assist with a project,
depending on the circumstances. Stakeholders will frequently press for the inclusion of features
that were not originally envisioned. All of these factors can quickly drive up a project’s cost.

2. Complexity will be increased


Software project management is a multi-stage, complex process. Unfortunately, some specialists
have a propensity to overcomplicate everything, which can lead to confusion among teams and
delays in project completion. They can also be forceful and rigid about their ideas, resulting in a
difficult work atmosphere. Projects with a larger scope are typically more arduous to complete,
especially if there isn’t a dedicated team committed completely to the project. Members of
cross-functional teams may lag far behind on their daily tasks, adding to the overall complexity
of the project being worked on.

3. Overhead in Communication
When software project management personnel are hired, new recruits enter the organization.
This produces a steady flow of communication that may or may not match the company’s culture.
As a result, it is advised to keep the team as small as feasible; communication overhead tends to
skyrocket once a team becomes large enough. When a large team is needed for a project, it is
critical to find software project managers who can communicate effectively with a variety of
people.
4. Lack of Originality
Software Project managers can sometimes provide little or no space for creativity. Team leaders
either place an excessive amount of emphasis on management processes or impose hard deadlines
on their employees, requiring them to develop and operate code within stringent guidelines. This
can stifle creative thought and innovation that could benefit the project. When it comes to
software project management, knowing when to encourage creativity and when to stick to the
project plan is crucial. Without software project management personnel, an organization can
perhaps build and ship code more quickly. However, employing a trained specialist to handle
these areas can open up new doors and help the organization achieve its objectives more quickly
and more thoroughly.

2. Software Configuration Management


Whenever software is built, there is always scope for improvement, and those improvements bring
changes into the picture. Changes may be required to modify or update an existing solution or to
create a new solution to a problem. Requirements keep changing daily, so systems must be
upgraded continually based on current requirements and needs to meet the desired outputs.
Changes should be analyzed before they are made to the existing system, recorded before they are
implemented, reported so that details of the state before and after are known, and controlled in a
manner that improves quality and reduces error. This is where Software Configuration Management
comes in. Software Configuration Management (SCM) is a set of activities that controls change by
identifying the items subject to change, establishing relationships between those items, defining
mechanisms for managing different versions, controlling the changes being implemented in the
current system, and auditing and reporting on the changes made. It is essential to control changes
because unchecked changes may end up undermining well-running software. In this way, SCM is a
fundamental part of all project management activities.
Processes involved in SCM – Configuration management provides a disciplined environment for
smooth control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from products that
compose baselines at given points in time (a baseline is a set of mutually consistent
Configuration Items, which has been formally reviewed and agreed upon, and serves
as the basis of further development). Establishing relationships among items, creating
a mechanism to manage multiple levels of control and procedure for the change
management system.
2. Version control – Creating versions of the existing product so that new products can be built
with the help of the SCM system. An example of version evolution is given below: suppose
after some changes, the version of a configuration object changes from 1.0 to 1.1. Minor
corrections and changes result in versions 1.1.1 and 1.1.2, which are followed by a major
update, object 1.2. Development of the object continues through 1.3 and 1.4, until finally a
noteworthy change to the object results in a new evolutionary path, version 2.0. Both
evolutionary paths are currently supported.
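The version evolution above can be sketched as a tiny tree model. This is purely illustrative and not tied to any real SCM tool; the class and method names are invented for the example:

```python
# Hypothetical model of a configuration object's version history.
# Each node records its version label and the versions derived from it.

class Version:
    def __init__(self, label):
        self.label = label
        self.children = []  # versions derived from this one

    def derive(self, label):
        """Create a new version branching off this one."""
        child = Version(label)
        self.children.append(child)
        return child

    def all_labels(self):
        """Depth-first listing of this version and everything derived from it."""
        labels = [self.label]
        for child in self.children:
            labels.extend(child.all_labels())
        return labels

# Reconstruct the evolutionary path from the text:
v10 = Version("1.0")
v11 = v10.derive("1.1")
v11.derive("1.1.1")       # minor corrections branch off 1.1
v11.derive("1.1.2")
v12 = v11.derive("1.2")   # major update
v13 = v12.derive("1.3")
v14 = v13.derive("1.4")
v14.derive("2.0")         # noteworthy change starts a new evolutionary path

print(v10.all_labels())
# ['1.0', '1.1', '1.1.1', '1.1.2', '1.2', '1.3', '1.4', '2.0']
```

Real SCM systems store far more per version (author, timestamp, change description), but the branching structure is the essential idea.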

3. Change control – Controlling changes to configuration items (CIs). The change control process
works as follows: a change request (CR) is submitted and evaluated to assess technical merit,
potential side effects, the overall impact on other configuration objects and system functions,
and the projected cost of the change. The results of the evaluation are presented as a change
report, which is used by a change control board (CCB), a person or group who makes the final
decision on the status and priority of the change. An engineering change request (ECR) is
generated for each approved change; if the change is rejected, the CCB notifies the developer
with the reasons. The ECR describes the change to be made, the constraints that must be
respected, and the criteria for review and audit. The object to be changed is “checked out” of
the project database, the change is made, and then the object is tested again. The object is then
“checked in” to the database and appropriate version control mechanisms are used to create the
next version of the software.
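The change-control flow described above can be sketched as a small state machine. The state names follow the description, but the code itself is a hypothetical illustration, not any real tool's workflow:

```python
# Allowed transitions in a toy change-control workflow (illustrative only).
ALLOWED = {
    "submitted":   ["evaluated"],
    "evaluated":   ["approved", "rejected"],  # CCB decision on the change report
    "approved":    ["checked_out"],           # ECR generated, object checked out
    "checked_out": ["modified"],
    "modified":    ["tested"],
    "tested":      ["checked_in"],            # new version created on check-in
    "rejected":    [],                        # developer notified with reasons
    "checked_in":  [],
}

class ChangeRequest:
    def __init__(self, cr_id):
        self.cr_id = cr_id
        self.state = "submitted"
        self.history = ["submitted"]

    def advance(self, new_state):
        """Move to the next state, rejecting transitions the process forbids."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

cr = ChangeRequest("CR-101")
for step in ["evaluated", "approved", "checked_out", "modified", "tested", "checked_in"]:
    cr.advance(step)
print(cr.history)
```

Encoding the process as explicit transitions makes it impossible to, say, check an object in without it having been tested first.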
4. Configuration auditing – A software configuration audit complements the formal technical
review of the process and product. It focuses on the technical correctness of the configuration
object that has been modified. The audit confirms the completeness, correctness, and
consistency of items in the SCM system and tracks action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to developers, testers,
end users, customers, and stakeholders through admin guides, user guides, FAQs, Release
notes, Memos, Installation Guide, Configuration guides, etc.

Software Configuration Management (SCM) is a software engineering practice that focuses on
managing the configuration of software systems and ensuring that software components are
properly controlled, tracked, and stored. It is a critical aspect of software development, as it helps
to ensure that changes made to a software system are properly coordinated and that the system is
always in a known and stable state.
SCM involves a set of processes and tools that help to manage the different components of a
software system, including source code, documentation, and other assets. It enables teams to track
changes made to the software system, identify when and why changes were made, and manage
the integration of these changes into the final product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to reported issues makes bug
tracking more effective.
2. Continuous Deployment and Integration: SCM combines with continuous processes
to automate deployment and testing, resulting in more dependable and timely software
delivery.
3. Risk management: SCM lowers the chance of introducing critical flaws by assisting
in the early detection and correction of problems.
4. Support for Big Projects: SCM offers an orderly method to handle code
modifications in big projects, fostering a well-organized development
process.
5. Reproducibility: By recording the precise versions of code, libraries, and
dependencies, SCM makes builds repeatable.
6. Parallel Development: SCM facilitates parallel development by enabling several
developers to collaborate on various branches at once.

Why is Software Configuration Management needed?


1. Replicability: SCM ensures that a software system can be reproduced at any stage
of its development. This is necessary for testing, debugging, and upholding
consistent environments across production, testing, and development.
2. Identification of Configuration: Source code, documentation, and executable files are
examples of configuration items that SCM helps to identify and label. The
management of a system’s constituent parts and their interactions depends on this
identification.
3. Effective Development Process: By automating monotonous tasks like managing
dependencies, merging changes, and resolving conflicts, SCM simplifies the development
process. This automation decreases the risk of error and increases efficiency.
Key objectives of SCM
1. Control the evolution of software systems: SCM helps to ensure that changes to a software
system are properly planned, tested, and integrated into the final product.
2. Enable collaboration and coordination: SCM helps teams to collaborate and coordinate their
work, ensuring that changes are properly integrated and that everyone is working from the
same version of the software system.
3. Provide version control: SCM provides version control for software systems, enabling teams to
manage and track different versions of the system and to revert to earlier versions if necessary.
4. Facilitate replication and distribution: SCM helps to ensure that software systems can be easily
replicated and distributed to other environments, such as test, production, and customer sites.
In short, SCM is a critical component of software development, and effective SCM practices can
help to improve the quality and reliability of software systems, as well as increase efficiency and
reduce the risk of errors.
The main advantages of SCM
1. Improved productivity and efficiency by reducing the time and effort required to
manage software changes.
2. Reduced risk of errors and defects by ensuring that all changes are properly tested
and validated.
3. Increased collaboration and communication among team members by providing a
central repository for software artifacts.
4. Improved quality and stability of software systems by ensuring that all changes are
properly controlled and managed.
The main disadvantages of SCM
1. Increased complexity and overhead, particularly in large software systems.
2. Difficulty in managing dependencies and ensuring that all changes are properly
integrated.
3. Potential for conflicts and delays, particularly in large development teams with
multiple contributors.

3. Project Scheduling

A project schedule is your project’s timetable: it consists of sequenced activities and milestones
that must be delivered within a given period of time. A project schedule is simply a mechanism
used to communicate which tasks need to be performed, which organizational resources will be
allocated to those tasks, and in what time frame the work must be done. Effective project
scheduling leads to project success, reduced cost, and increased customer satisfaction.
Scheduling in project management means listing out the activities, deliverables, and milestones
to be delivered within a project. It contains far more detail than your average weekly planner.
The most common and important form of project schedule is the Gantt chart.
Process: The manager needs to estimate the time and resources of the project while scheduling
it. All activities in the project must be arranged in a coherent sequence, that is, in a logical and
well-organized manner that is easy to understand. Initial estimates of the project may be made
optimistically, assuming that everything favorable will happen and no threats or problems will
arise. During project scheduling, the total work is divided into various small activities or tasks.
The project manager then decides the time required for each activity or task to be completed.
Some activities may even be conducted in parallel for efficient performance. The project
manager should be aware that no stage of the project is problem-free.

Problems arise during Project Development Stage :


• People may leave or remain absent during a particular stage of development.
• Hardware may fail while in use.
• A required software resource may not be available at present, etc.
The project schedule is represented as a set of charts in which the work-breakdown structure and
the dependencies among activities are shown. To complete the project within the given schedule,
the required resources must be available when they are needed. Therefore, resource estimation
should be done before development starts. Resources required for
development of the project:
• Human effort
• Sufficient disk space on server
• Specialized hardware
• Software technology
• Travel allowance required by project staff, etc.
Advantages of Project Scheduling: A project schedule provides several advantages in project
management:
• It ensures that everyone remains on the same page regarding completed tasks,
dependencies, and deadlines.
• It helps in identifying issues and concerns, such as a lack or unavailability of
resources, early.
• It also helps to identify relationships and to monitor progress.
• It provides effective budget management and risk mitigation.
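As a small illustration of scheduling with dependencies and parallel activities, the earliest finish day of each task can be computed from its duration and prerequisites. The task names and durations below are hypothetical:

```python
from functools import lru_cache

# Hypothetical tasks: name -> (duration in days, list of prerequisite tasks).
tasks = {
    "requirements": (3, []),
    "design":       (5, ["requirements"]),
    "coding":       (10, ["design"]),
    "test_plan":    (4, ["requirements"]),   # runs in parallel with design/coding
    "testing":      (6, ["coding", "test_plan"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest day a task can finish, honoring all its dependencies."""
    duration, deps = tasks[name]
    start = max((earliest_finish(d) for d in deps), default=0)
    return start + duration

print({t: earliest_finish(t) for t in tasks})
# {'requirements': 3, 'design': 8, 'coding': 18, 'test_plan': 7, 'testing': 24}
```

Note that test_plan finishes on day 7 but testing cannot start until coding finishes on day 18; the longest such chain (requirements, design, coding, testing) is the critical path that a Gantt chart makes visible.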

4. DevOps
DevOps is a software development methodology that improves the collaboration between
developers and operations teams using various automation tools. These automation tools are
applied across the various stages that make up the DevOps lifecycle.
Goal: The goal of DevOps is to increase an organization’s speed when it comes to delivering
applications and services. Many companies, such as Amazon and Netflix, have successfully
implemented DevOps to enhance their user experience.
Example:
Facebook’s mobile app is updated every two weeks, effectively telling users they can have
what they want, when they want it. Ever wondered how Facebook manages this so smoothly?
It is the DevOps philosophy that helps Facebook ensure its apps aren’t outdated and that users
get the best experience. Facebook accomplishes this through a code-ownership model that
makes its developers responsible for each unit of code they write, including testing and support
through production and delivery. Through policies like this, Facebook has developed a DevOps
culture and has successfully accelerated its development lifecycle.
Industries have started to gear up for digital transformation by shifting their delivery cycles to
weeks and months instead of years, while maintaining high quality. The solution to all this
is DevOps.

How Does DevOps Work?

The DevOps Lifecycle divides the SDLC lifecycle into the following stages:

(Figure: Automated CI/CD Pipeline)


1. Continuous Development:
This stage involves committing code to version control tools such as Git or SVN for maintaining
the different versions of the code, and tools like Ant, Maven, Gradle for building/packaging the
code into an executable file that can be forwarded to the QAs for testing.
2. Continuous Integration:
This stage is a critical point in the whole DevOps lifecycle. It deals with integrating the different
stages of the DevOps lifecycle and is, therefore, key to automating the whole DevOps
process.
3. Continuous Deployment:
In this stage the code is built, the environment or the application is containerized and is pushed
onto the desired server. The key processes in this stage are Configuration Management,
Virtualization, and Containerization.
4. Continuous Testing:
This stage deals with automated testing of the application pushed by the developer. If there is an
error, a message is sent back to the integration tool, which in turn notifies the developer of
the error. If the tests succeed, the message is sent to the integration tool, which pushes the build
to the production server.
5. Continuous Monitoring:
This stage continuously monitors the deployed application for bugs or crashes. It can also be set
up to collect user feedback. The collected data is then sent to the developers to improve the
application.

5. Motivation
The motivation behind DevOps stems from the need to address challenges and inefficiencies
in traditional software development and IT operations practices. Several key factors have
driven the adoption of DevOps practices and principles:

1. Increasing Complexity and Scale: Modern software systems are becoming
increasingly complex, distributed, and interconnected. Traditional development and
operations practices struggle to keep pace with the growing complexity and scale of
software deployments, leading to inefficiencies, delays, and errors.
2. Faster Time-to-Market: In today's competitive business landscape, organizations
strive to deliver software products and updates to market quickly and efficiently.
DevOps practices, such as continuous integration, continuous delivery, and automation,
enable faster time-to-market by streamlining the software delivery pipeline and
reducing manual overhead.
3. Continuous Feedback and Improvement: DevOps promotes a culture of continuous
feedback and improvement, where development, operations, and other stakeholders
collaborate closely to identify bottlenecks, address issues, and optimize processes
iteratively. By incorporating feedback loops into the software development lifecycle,
DevOps enables organizations to respond to changing requirements and user needs
more effectively.
4. Enhanced Collaboration and Communication: Traditional silos between
development and operations teams can lead to misalignment, misunderstandings, and
delays in the software delivery process. DevOps emphasizes collaboration,
transparency, and shared accountability across teams, fostering a culture of trust,
communication, and teamwork.
5. Automation and Efficiency: Manual, repetitive tasks in software development, testing,
deployment, and operations can be error-prone, time-consuming, and resource-intensive.
DevOps advocates for automation of routine tasks, allowing teams to focus
on higher-value activities, such as innovation, problem-solving, and customer
engagement.
6. Resilience and Reliability: With the increasing frequency and impact of software
failures, organizations prioritize resilience, reliability, and fault tolerance in their
systems. DevOps practices, such as infrastructure as code, automated testing, and
continuous monitoring, help organizations build more resilient, reliable, and scalable
software systems that can withstand failures and recover quickly from disruptions.
7. Cloud Computing and DevOps Culture: The rise of cloud computing platforms, such
as AWS, Azure, and Google Cloud, has transformed the way organizations build,
deploy, and manage software applications. Cloud-native architectures and DevOps
principles go hand in hand, enabling organizations to leverage the scalability,
flexibility, and agility of cloud infrastructure to accelerate software delivery and
innovation.

In summary, the motivation behind DevOps lies in the need for organizations to adapt to the
evolving demands of the digital economy by embracing collaborative, agile, and automated
approaches to software development, deployment, and operations. By adopting DevOps
practices and principles, organizations can achieve greater efficiency, resilience, and
competitiveness in today's fast-paced and dynamic technology landscape.

6. Cloud as a platform
The cloud has revolutionized the landscape of software engineering by providing a powerful and flexible
platform for developing, deploying, and managing software applications. As a platform, the cloud offers
numerous benefits and capabilities that enable organizations to innovate, scale, and optimize their
software engineering processes. Here are some key aspects of the cloud as a platform in software
engineering:

1. Infrastructure as a Service (IaaS):


Cloud providers offer infrastructure resources such as virtual machines, storage, and networking as
on-demand services. This enables software engineers to provision, configure, and scale infrastructure
resources dynamically based on application requirements without the need to manage physical hardware.

2. Platform as a Service (PaaS):


PaaS offerings provide a higher level of abstraction by abstracting away the underlying infrastructure and
providing development and deployment environments for building and deploying applications. PaaS
platforms typically include tools, frameworks, and services for application development, database
management, and integration, enabling developers to focus on building and enhancing application
functionality without managing the underlying infrastructure.

3. Scalability and Elasticity:


Cloud platforms offer scalability and elasticity, allowing software applications to scale up or down
dynamically in response to changing demand. This enables organizations to handle spikes in traffic,
accommodate growing user bases, and optimize resource utilization, ensuring optimal performance and
cost efficiency.
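Elastic scaling decisions of the kind described here are often threshold-driven. A minimal sketch follows; the watermarks and replica limits are assumptions for illustration, not any provider's defaults:

```python
# Illustrative autoscaling rule: scale out when average CPU utilization
# exceeds a high watermark, scale in when it drops below a low watermark.

def desired_replicas(current, avg_cpu, low=0.30, high=0.70, min_r=1, max_r=10):
    if avg_cpu > high:
        return min(current * 2, max_r)    # scale out under load spikes
    if avg_cpu < low and current > min_r:
        return max(current // 2, min_r)   # scale in when mostly idle
    return current                        # within band: leave capacity alone

print(desired_replicas(2, 0.85))  # 4  (spike: double capacity)
print(desired_replicas(8, 0.10))  # 4  (idle: halve capacity)
print(desired_replicas(3, 0.50))  # 3  (steady: no change)
```

Real autoscalers (cloud-provider or Kubernetes) add cooldown periods and smoothing so capacity doesn't oscillate, but the core decision looks much like this.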

4. Flexibility and Agility:


The cloud provides a flexible and agile platform for software development, enabling rapid prototyping,
experimentation, and iteration. Developers can quickly provision development and test environments,
leverage pre-built services and APIs, and integrate third-party components to accelerate development
cycles and time-to-market.

5. Cost Efficiency:
Cloud platforms offer a pay-as-you-go pricing model, where organizations pay only for the resources and
services they consume. This eliminates the need for upfront capital investments in hardware and
infrastructure and allows organizations to scale resources based on actual usage and demand, optimizing
costs and resource utilization.
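A quick worked example of the pay-as-you-go model; all rates and usage figures are invented for illustration:

```python
# Hypothetical pay-as-you-go vs fixed-capacity cost comparison.
HOURLY_RATE = 0.10       # $ per instance-hour (assumed rate)
FIXED_MONTHLY = 500.0    # $ to own equivalent peak capacity outright (assumed)

# A 720-hour month: 600 quiet hours need 2 instances, 120 peak hours need 10.
usage_hours = 600 * 2 + 120 * 10   # total instance-hours actually consumed

pay_as_you_go = usage_hours * HOURLY_RATE
print(f"pay-as-you-go: ${pay_as_you_go:.2f} vs fixed: ${FIXED_MONTHLY:.2f}")
# 2400 instance-hours * $0.10 = $240.00, well under the fixed $500.00
```

The saving comes from not paying for peak capacity (10 instances) during the 600 quiet hours; with a flatter load profile the comparison can tip the other way.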

6. Global Reach and Accessibility:


Cloud providers operate data centers and regions around the world, enabling organizations to deploy and
distribute software applications globally with low latency and high availability. This global reach and
accessibility facilitate geographic expansion, improve user experience, and support disaster recovery and
business continuity planning.

7. Security and Compliance:


Cloud providers invest heavily in security and compliance measures to protect customer data,
infrastructure, and applications. Cloud platforms offer built-in security features such as encryption,
identity and access management, and network security controls, helping organizations maintain data
privacy, integrity, and compliance with industry regulations and standards.

8. DevOps and Continuous Delivery:


The cloud facilitates DevOps practices and continuous delivery by providing automation tools,
infrastructure-as-code frameworks, and integrated development and deployment pipelines. Organizations
can leverage cloud-native services and DevOps practices to automate software delivery, monitor
application performance, and iterate rapidly based on user feedback.
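The infrastructure-as-code idea mentioned above can be illustrated with a toy reconciler: the desired state is declared as data, and the code computes which actions would bring the actual environment in line with it. Resource names and the data shapes are hypothetical, not any real IaC tool's format:

```python
# Desired infrastructure declared as data, versus what currently exists.
desired = {"web-server": {"count": 3}, "cache": {"count": 1}}
actual  = {"web-server": {"count": 2}, "worker": {"count": 4}}

def plan(desired, actual):
    """Compute the create/update/delete actions needed to reach desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(plan(desired, actual))
# [('update', 'web-server'), ('create', 'cache'), ('delete', 'worker')]
```

This plan-then-apply split is the pattern real IaC tools such as Terraform follow: the declared state lives in version control, and the tool reconciles reality against it.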

In summary, the cloud as a platform has transformed software engineering by providing a scalable,
flexible, and cost-effective environment for building, deploying, and managing software applications. By
leveraging cloud services, organizations can accelerate innovation, improve agility, and deliver superior
experiences to users while optimizing costs and mitigating risks in today's dynamic and competitive
market landscape.

7. Cloud as a platform: Operations

In the context of cloud computing, operations refer to the management, monitoring, and maintenance of
cloud-based infrastructure, applications, and services. Cloud operations encompass a wide range of
activities aimed at ensuring the availability, performance, security, and efficiency of cloud environments.
Here are some key aspects of cloud operations as a platform:

1. Provisioning and Deployment:


Cloud operations involve provisioning and deploying infrastructure resources, virtual machines,
containers, and applications on cloud platforms. Operations teams use automation tools and frameworks
to provision resources, configure environments, and deploy software applications efficiently and
consistently.

2. Configuration Management:
Cloud operations teams manage and maintain configuration settings, parameters, and policies across cloud
environments. They use configuration management tools to automate the configuration of servers,
networking, security settings, and other infrastructure components, ensuring consistency and compliance
with organizational standards and policies.

3. Monitoring and Alerting:


Monitoring and alerting are essential aspects of cloud operations for detecting and responding to issues,
anomalies, and performance degradation in real-time. Operations teams use monitoring tools and
dashboards to monitor resource utilization, application performance, and service availability, and set up
alerts and notifications to proactively identify and address issues before they impact users.
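Threshold-based alerting of the kind described here can be sketched as follows. Requiring several consecutive breaches before firing avoids alert flapping on momentary spikes; the threshold and window values are assumptions:

```python
# Fire an alert only after a metric breaches its threshold for several
# consecutive samples. All numbers are illustrative.

def alerts(samples, threshold=0.90, consecutive=3):
    fired, streak = [], 0
    for i, value in enumerate(samples):
        streak = streak + 1 if value > threshold else 0
        if streak == consecutive:
            fired.append(i)   # sample index at which the alert fires
    return fired

cpu = [0.5, 0.95, 0.96, 0.8, 0.92, 0.93, 0.97, 0.91]
print(alerts(cpu))  # [6]: the first two breaches reset at sample 3
```

Production monitoring systems layer on notification routing, deduplication, and escalation, but the breach-for-N-samples rule is the common core.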

4. Incident Management and Response:


Cloud operations teams are responsible for managing and responding to incidents, outages, and service
disruptions in cloud environments. They follow incident management processes and workflows to triage,
prioritize, and resolve incidents effectively, minimize downtime, and restore services to normal operation
as quickly as possible.

5. Security and Compliance:


Security and compliance are top priorities for cloud operations teams to protect cloud-based infrastructure,
applications, and data from security threats, vulnerabilities, and breaches. Operations teams implement
security best practices, configure security controls, and enforce compliance policies to mitigate risks,
ensure data privacy, and comply with regulatory requirements.

6. Performance Optimization:
Cloud operations teams optimize the performance and efficiency of cloud environments by tuning
configurations, optimizing resource utilization, and scaling resources dynamically based on workload
demand. They identify and address performance bottlenecks, optimize application code, and leverage
caching and content delivery networks (CDNs) to improve response times and user experience.

7. Disaster Recovery and Business Continuity:


Cloud operations teams implement disaster recovery (DR) and business continuity (BC) strategies to
ensure the resilience and availability of cloud-based services in the event of disasters, hardware failures,
or service disruptions. They design and test DR plans, replicate data across multiple regions, and leverage
backup and recovery solutions to minimize data loss and downtime.

8. Cost Management and Optimization:


Cloud operations teams are responsible for managing and optimizing cloud costs to ensure cost-
effectiveness and maximize return on investment (ROI). They analyze cost usage and spending patterns,
implement cost allocation and tagging strategies, and leverage cost management tools to optimize resource
utilization, eliminate waste, and control cloud spending.
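As a rough illustration of cost allocation by tag, the sketch below sums hypothetical billing records per team tag. The record format is invented for the example; real figures would come from a provider's billing export such as the AWS Cost and Usage Report.

```python
# Hedged sketch: aggregating cloud spend per cost-allocation tag from
# invented billing records. Untagged records are grouped separately,
# which is why tagging strategies matter for cost visibility.

from collections import defaultdict

def cost_by_tag(records, tag_key="team"):
    """Sum the cost of each billing record under its tag value."""
    totals = defaultdict(float)
    for rec in records:
        tag_value = rec.get("tags", {}).get(tag_key, "untagged")
        totals[tag_value] += rec["cost"]
    return dict(totals)

records = [
    {"service": "compute", "cost": 120.0, "tags": {"team": "web"}},
    {"service": "storage", "cost": 30.0, "tags": {"team": "data"}},
    {"service": "compute", "cost": 15.5},  # missing tag -> "untagged"
]
print(cost_by_tag(records))  # {'web': 120.0, 'data': 30.0, 'untagged': 15.5}
```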
In summary, cloud operations as a platform encompass a broad range of activities and responsibilities
aimed at managing, monitoring, and maintaining cloud-based infrastructure, applications, and services.
By adopting best practices, automation, and continuous improvement, cloud operations teams can ensure
the reliability, security, and efficiency of cloud environments while enabling innovation, scalability, and
agility in today's digital landscape.

8. Deployment Pipeline
A deployment pipeline in software engineering is a continuous delivery practice that automates the
process of building, testing, and deploying software applications across different environments, typically
from development through to production. It is a series of automated stages that code changes pass through
before being released into production. The primary goal of a deployment pipeline is to ensure that changes
to the codebase are thoroughly tested, validated, and ready for production deployment.

Components of a Deployment Pipeline:


1. Source Control: The pipeline starts with the version control system (e.g., Git) where developers
commit their code changes. The pipeline monitors the version control system for new commits.
2. Continuous Integration (CI)
• Build: The pipeline automatically builds the application from the source code whenever new
changes are detected.
• Unit Tests: Automated unit tests are run against the built application to ensure that new changes
haven't introduced regressions or errors.
3. Automated Testing:
• Integration Tests: After the build stage, the application undergoes integration tests to ensure that
different components work together correctly.
• Acceptance Tests: End-to-end acceptance tests are performed to validate the application's behavior
from a user's perspective.
4. Deployment to Environments:
• Development Environment: After passing tests, the application is deployed to a development
environment for further testing and validation.
• Staging Environment: Once the application passes tests in the development environment, it can
be deployed to a staging environment that closely resembles the production environment.
• Production Environment: Finally, after successful testing in the staging environment, the
application is deployed to the production environment for release to end-users.
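The stage sequence above can be modeled as a simple ordered runner: each stage must pass before the next runs, and a failure stops promotion to later environments. The stage functions here are placeholders standing in for real build, test, and deploy jobs.

```python
# Illustrative sketch of a deployment pipeline as ordered stages.
# Each stage function is a placeholder that would normally invoke a
# real build/test/deploy job and report success or failure.

def build():             return True   # compile/package succeeded
def unit_tests():        return True
def integration_tests(): return True
def deploy_staging():    return True
def deploy_production(): return True

PIPELINE = [
    ("build", build),
    ("unit tests", unit_tests),
    ("integration tests", integration_tests),
    ("deploy to staging", deploy_staging),
    ("deploy to production", deploy_production),
]

def run_pipeline(stages):
    """Run stages in order; stop at the first failing stage."""
    for name, stage in stages:
        if not stage():
            return f"pipeline failed at: {name}"
    return "pipeline succeeded"

print(run_pipeline(PIPELINE))  # pipeline succeeded
```

Real CI/CD servers (Jenkins, GitLab CI/CD) express the same idea declaratively, but the control flow is essentially this: a linear gate, where early failure gives fast feedback.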
Key Principles of Deployment Pipelines:
1. Automation: The deployment pipeline automates the entire process of building, testing, and
deploying software changes, minimizing manual intervention and human error.
2. Consistency: The pipeline ensures consistency in the deployment process across different
environments, reducing the risk of configuration drift and inconsistencies between environments.
3. Visibility and Feedback: The pipeline provides visibility into the status of each stage of the
deployment process, enabling developers and stakeholders to track progress, identify issues, and
receive feedback in real-time.
4. Fast Feedback Loops: By running tests and validations early and often in the pipeline, developers
receive fast feedback on the quality and correctness of their code changes, enabling rapid iteration
and improvement.
5. Reproducibility: The deployment pipeline ensures that the deployment process is repeatable and
reproducible, allowing teams to roll back changes, reproduce issues, and troubleshoot problems
effectively.
Benefits of Deployment Pipelines:
1. Faster Time-to-Market: Deployment pipelines enable organizations to release software changes
more frequently, reliably, and predictably, reducing time-to-market and accelerating innovation.
2. Improved Quality: By automating testing and validation processes, deployment pipelines help
maintain high software quality, reduce defects, and improve overall reliability and stability.
3. Reduced Risk: Deployment pipelines mitigate the risk of introducing errors and regressions into
production environments by providing automated testing and validation at every stage of the
deployment process.
4. Efficiency and Consistency: Deployment pipelines streamline and standardize the deployment
process, increasing efficiency, reducing manual effort, and ensuring consistency across
environments and releases.
5. Continuous Improvement: Deployment pipelines foster a culture of continuous improvement
and collaboration by providing fast feedback loops, enabling teams to identify and address issues
early, iterate rapidly, and deliver value to customers more effectively.

In summary, deployment pipelines play a crucial role in modern software engineering practices by
automating the process of building, testing, and deploying software changes, enabling organizations to
release high-quality software more frequently, reliably, and efficiently.

9. Overall Architecture Building and Testing

Introduction: Software needs an architectural design to represent the design of the software.
IEEE defines architectural design as "the process of defining a collection of hardware and
software components and their interfaces to establish the framework for the development of a
computer system." The software built for computer-based systems can exhibit one of many
architectural styles.
Each style describes a system category that consists of:

• A set of components (e.g., a database, computational modules) that perform a function
required by the system.
• A set of connectors that enable coordination, communication, and cooperation between the
components.
• Constraints that define how components can be integrated to form the system.
• Semantic models that help the designer understand the overall properties of the system.

Architectural styles are used to establish a structure for all the components of the system.

Taxonomy of Architectural styles:

1] Data centered architectures:

• A data store resides at the center of this architecture and is accessed frequently by the
other components, which update, add, delete, or modify the data within the store.
• The figure illustrates a typical data-centered style: client software accesses a central
repository. In a variation of this approach, the repository becomes a blackboard, which sends
notifications to client software when data of interest to a client changes.
• This data-centered architecture promotes integrability: existing components can be changed
and new client components can be added to the architecture without affecting the other
clients.
• Data can be passed among clients using the blackboard mechanism.
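The blackboard notification idea can be sketched as a repository that pushes changes to registered clients. This is an illustrative toy with invented class names, not a description of any specific tool.

```python
# Data-centered (blackboard) sketch: a central repository notifies
# registered clients when shared data changes. Class names are
# hypothetical, chosen only for illustration.

class Repository:
    def __init__(self):
        self._data = {}
        self._clients = []

    def register(self, client):
        """Add a client; new clients can join without affecting others."""
        self._clients.append(client)

    def update(self, key, value):
        self._data[key] = value
        for client in self._clients:   # blackboard-style notification
            client.notify(key, value)

class Client:
    def __init__(self, name):
        self.name = name
        self.seen = []

    def notify(self, key, value):
        self.seen.append((key, value))

repo = Repository()
a, b = Client("a"), Client("b")
repo.register(a)
repo.register(b)
repo.update("status", "open")
print(a.seen)  # [('status', 'open')]
```

Note how a third client could be registered later without changing the repository or the existing clients, which is the integrability property described above.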
Advantages of Data centered architecture
• The repository of data is independent of the clients.
• Clients work independently of each other.
• It is simple to add additional clients.
• Modifications are easy to make.

Figure: Data-centered architecture

2] Data flow architectures:

• This kind of architecture is used when input data is transformed into output data through a
series of computational or manipulative components.
• The figure represents a pipe-and-filter architecture: a set of components called filters
connected by pipes that transmit data from one component to the next.
• Each filter works independently and is designed to take data input of a certain form and
produce data output of a specified form for the next filter. A filter does not require any
knowledge of the workings of its neighboring filters.
• If the data flow degenerates into a single line of transforms, it is termed batch
sequential. This structure accepts a batch of data and then applies a series of sequential
components to transform it.
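A minimal pipe-and-filter sketch in Python: each filter is an independent function, and the pipe simply passes one filter's output to the next. Because the data flows through a single line of transforms, this particular example is batch sequential.

```python
# Pipe-and-filter sketch: each filter consumes input of one form and
# produces output for the next filter, with no knowledge of its
# neighbors. Filter names are invented for the example.

def read_lines(text):              # source filter
    return text.splitlines()

def strip_blank(lines):            # filter: drop empty lines
    return [l for l in lines if l.strip()]

def to_upper(lines):               # filter: transform each line
    return [l.upper() for l in lines]

def pipeline(data, *filters):
    """Pipe data through each filter in order (batch-sequential)."""
    for f in filters:
        data = f(data)
    return data

result = pipeline("alpha\n\nbeta\n", read_lines, strip_blank, to_upper)
print(result)  # ['ALPHA', 'BETA']
```

Swapping, removing, or inserting a filter changes the system without touching the other filters, which is the reuse and modification advantage noted below.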
Advantages of Data flow architecture
• It encourages maintenance, reuse, and modification.
• Concurrent execution is supported by this design.
Disadvantages of Data flow architecture
• It frequently degenerates into a batch-sequential system.
• It does not suit applications that require a high degree of user interaction.
• It is not easy to coordinate two different but related streams.
Figure: Data-flow architecture

3] Call and Return architectures: This style is used to create a program that is easy to scale
and modify. Many sub-styles exist within this category; two of them are explained below.

• Remote procedure call architecture: The components of a main-program or subprogram
architecture are distributed among multiple computers on a network.
• Main program or subprogram architectures: The main program decomposes into a number of
subprograms or functions arranged in a control hierarchy. The main program invokes a number
of subprograms, which can in turn invoke other components.

4] Object Oriented architecture: The components of the system encapsulate data and the
operations that must be applied to manipulate the data. Coordination and communication
between the components are established via message passing.
Characteristics of Object Oriented architecture
• Objects protect the system's integrity.
• An object is unaware of the internal representation of other objects.
Advantages of Object Oriented architecture
• It enables the designer to decompose a problem into a collection of autonomous objects.
• Other objects are unaware of an object's implementation details, allowing changes to be
made without having an impact on other objects.
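The encapsulation and message-passing idea can be shown with two small classes: each object hides its internal representation, and other objects interact with it only through its operations. The class names are invented for illustration.

```python
# Object-oriented style sketch: data and its operations live together,
# and objects communicate by message passing (method calls) without
# seeing each other's internals. Hypothetical example classes.

class TaskQueue:
    def __init__(self):
        self._tasks = []          # internal representation, hidden

    def add(self, task):          # operation on the encapsulated data
        self._tasks.append(task)

    def next_task(self):
        return self._tasks.pop(0) if self._tasks else None

class Worker:
    def __init__(self, queue):
        self._queue = queue       # knows the interface, not the internals

    def work(self):
        task = self._queue.next_task()   # message passing via method call
        return f"done: {task}" if task else "idle"

q = TaskQueue()
q.add("build")
print(Worker(q).work())  # done: build
```

TaskQueue could switch its internal list for a priority heap without any change to Worker, which is exactly the advantage listed above.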
5] Layered architecture:

• A number of different layers are defined, with each layer performing a well-defined set of
operations. Moving inward, the operations of each layer become progressively closer to the
machine instruction set.
• At the outer layer, components handle user interface operations; at the inner layers,
components perform operating system interfacing (communication and coordination with the
OS).
• Intermediate layers provide utility services and application software functions.
• A common example of this architectural style is the OSI-ISO (Open Systems Interconnection –
International Organization for Standardization) communication system.

Figure: Layered architecture
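A toy layered example, with invented function names: the outer (user interface) layer calls an intermediate (utility) layer, which in turn calls the innermost (machine-facing) layer, so each layer depends only on the layer directly beneath it.

```python
# Layered-architecture sketch: calls flow strictly downward through
# the layers. Layer contents are hypothetical placeholders.

def storage_layer(key):                 # innermost: machine/OS-facing
    return {"greeting": "hello"}.get(key)

def service_layer(key):                 # intermediate: utility services
    value = storage_layer(key)
    return value.upper() if value else None

def ui_layer(key):                      # outermost: user interface
    value = service_layer(key)
    return f"Result: {value}"

print(ui_layer("greeting"))  # Result: HELLO
```

Because ui_layer never touches storage_layer directly, the inner layer can be replaced (say, by a real database call) without changing the outer one.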

10. Deployment

Deploying software
Software deployment is the process of making software available to be used on a system by users and
other programs. You might deploy software to create a backup copy of the software, to move the software
to another system, or to create another SMP/E-serviceable copy for installing service or other products.
To assist you with performing these tasks, the Software Management task provides the deployment
capability.
A deployment is a checklist that guides you through the software deployment process. It is the object in
which z/OSMF stores your input and any output that is generated for each step in the checklist. You can
use a deployment to deploy one software instance onto one system at a time.

To view a list of current or past deployments or to define a new deployment, use the Deployments page.
To display this page, click Deployments on the Software Management page or select Deployments from
the Switch To menu provided on the Software Instances page and the Products page.

Deployment checklist
The deployment checklist guides you through the following steps:

• Specifying the name, description, and categories to use for the deployment.
• Selecting the software to be deployed.
• Selecting the objective of the deployment.
• Generating reports that help you identify if SYSMODs are missing in the source software or any
related instances.
• Specifying the data set names, catalogs, volumes, mount points, and SMP/E zone names to use for
the target software.
• Defining the settings to use for the deployment jobs, and generating the jobs.
• Submitting the deployment jobs, and viewing the job output.
• Specifying the name, description, and categories to use for the target software.

When completing the steps in the checklist, you will work with the source for the deployment and the
target for the deployment. The source for the deployment is the software to be deployed or the original
copy. The source can be either a software instance or a portable software instance. The target for the
deployment is the new software copy and will be a software instance. When a deployment is complete,
you will have two copies of the software – the source copy and the target copy.

Figure 1 depicts a sample deployment. The deployment name is ZOSV2R2_TO_COPY_OF_ZOSV2R2. The
deployment was used to deploy source software ZOSV2R2 to another location. The target software is
called Copy_of_ZOSV2R2.

Figure 1. Deployment "ZOSV2R2_TO_COPY_OF_ZOSV2R2"

The deployment checklist helps you adhere to IBM® recommendations for software deployment because many
of those recommendations are integrated into the deployment process. For example, when deploying SMP/E
software, z/OSMF:

• Uses SMP/E DDDEF entries to automatically locate data sets, such as SMP/E data sets, target
libraries, and distribution libraries.
• Deploys all of the software included in a target zone and optionally the related distribution zone.
• Copies the SMP/E consolidated software inventory (CSI) with the software. If you currently do
not copy your SMP/E CSIs, you will see a slight increase in DASD usage per target software.

Deployment history

As you complete each step in the checklist, z/OSMF maintains a history or log of your input and any
output it generates in the deployment object. For example, z/OSMF captures the:

• Source software you selected.


• Configuration values you specified for the target software.
• Deployment summary, which describes the impact the configuration values will have on the target
system.
• List of jobs it generated for the deployment.
• Resulting target software.
• User ID of the user who created, modified, and completed the deployment, and the date and time
those actions occurred.

You can use this information to assist you with audits and problem determination. You can also use it to
simplify subsequent deployments by basing them on existing deployments. For more information about
the contents of the deployment history, see help topic View Deployment page. To view the history,
complete the steps provided in help topic Viewing deployments.

• Enabling remote deployment


You can use the Software Management task to deploy software to DASD volumes shared within the
same sysplex (local deployment) or to DASD volumes accessible to another sysplex (remote
deployment). Local deployments are enabled by default. To enable remote deployments, you must
define the system, HTTP proxy, FTP or SFTP server, and FTP or SFTP profile definitions
that z/OSMF needs to complete the request.
• Making changes on the target system
The following table describes the changes that will occur on the target system as you complete each
step in the checklist.
• Defining new deployments
To deploy software, you must define a new deployment. To do so, use the New action provided in
the Deployments table.
• Modifying and resuming deployments
To modify a completed deployment or to resume a deployment that is in progress, use
the Modify action provided in the Deployments table.
• Viewing deployments
To view the deployment history, use the View action provided in the Deployments table.
• Canceling deployments
If you do not want to complete the deployment checklist, use the Cancel action provided in
the Deployments table. Doing so cancels the deployment and unlocks the associated software
instances. After you cancel a deployment, you can modify the corresponding software instances or
use them in other deployments. You can, however, only view or remove the canceled deployment.
• Copying deployments
To copy a deployment, use the Copy action provided in the Deployments table.
• Removing deployments
The list of saved deployments (Deployments table) is provided to assist you with audits and
problem determination. If you want to prune the list, use the Remove action provided in
the Deployments table. Doing so removes the deployment and the corresponding deployment
history from z/OSMF only. No changes are made on the system, and the associated software
instances, global zones, or categories are not removed from z/OSMF.
• Deployments page
You can use the Deployments page in the Software Management task to define new deployments
and to modify, view, copy, cancel, or remove existing deployments.

11. Deployment Tools

In software engineering, various tools are available to facilitate the deployment process, automate tasks,
and streamline the release of software applications. These tools help manage the deployment pipeline,
orchestrate releases, and ensure consistency and reliability across different environments. Here are some
commonly used deployment tools in software engineering:

1. Continuous Integration/Continuous Deployment (CI/CD) Tools:


1. Jenkins: Jenkins is an open-source automation server that supports continuous integration and
continuous delivery pipelines. It allows developers to automate the building, testing, and
deployment of applications across different environments.
2. GitLab CI/CD: GitLab provides built-in CI/CD capabilities as part of its version control and
collaboration platform. It allows developers to define CI/CD pipelines using YAML configuration
files and automate the entire software delivery process.
3. CircleCI: CircleCI is a cloud-based CI/CD platform that automates the testing and deployment of
software applications. It integrates with popular version control systems and provides
customizable workflows for building, testing, and deploying code changes.
2. Configuration Management Tools:
1. Ansible: Ansible is an open-source automation tool that automates configuration management,
application deployment, and orchestration tasks. It uses simple YAML syntax and SSH
connections to manage infrastructure and deploy applications across different environments.
2. Puppet: Puppet is a configuration management tool that automates the provisioning,
configuration, and management of infrastructure resources. It uses a declarative language to define
infrastructure as code and enforce desired state configurations.
3. Chef: Chef is a configuration management tool that automates infrastructure provisioning,
configuration, and application deployment. It uses a Ruby-based DSL (domain-specific language)
to define infrastructure configurations and manage dependencies.
3. Containerization and Orchestration Tools:
1. Docker: Docker is a containerization platform that enables developers to package applications and
their dependencies into lightweight, portable containers. It provides tools for building, distributing,
and running containerized applications across different environments.
2. Kubernetes: Kubernetes is an open-source container orchestration platform that automates the
deployment, scaling, and management of containerized applications. It provides features for
scheduling, load balancing, and service discovery to ensure high availability and scalability.
3. Amazon ECS (Elastic Container Service): ECS is a fully managed container orchestration
service provided by Amazon Web Services (AWS). It allows users to run containerized
applications on AWS infrastructure and automate deployment, scaling, and management tasks.
4. Infrastructure as Code (IaC) Tools:
1. Terraform: Terraform is an open-source IaC tool that allows users to define infrastructure
configurations using a declarative language called HashiCorp Configuration Language (HCL). It
enables users to provision and manage infrastructure resources across different cloud providers
and services.
2. AWS CloudFormation: AWS CloudFormation is a service provided by Amazon Web Services
(AWS) for automating the creation and management of AWS resources. It allows users to define
infrastructure as code using JSON or YAML templates and provision resources in a predictable
and consistent manner.
3. Azure Resource Manager (ARM) Templates: ARM Templates are JSON-based templates
provided by Microsoft Azure for defining and deploying Azure resources. They enable users to
automate the provisioning and management of Azure infrastructure using declarative
configurations.
5. Release Management and Orchestration Tools:
1. Spinnaker: Spinnaker is an open-source continuous delivery platform developed by Netflix and
Google. It provides a flexible and extensible platform for orchestrating and automating software
deployments across different cloud providers and environments.
2. Octopus Deploy: Octopus Deploy is a release management tool that automates the deployment of
applications and infrastructure. It provides features for managing release pipelines, promoting
releases between environments, and implementing deployment best practices.
3. XL Deploy: XL Deploy is a deployment automation tool that helps organizations automate the
deployment of applications and middleware across different environments. It provides features for
modeling, orchestrating, and visualizing deployment pipelines.

These tools play a critical role in modern software engineering practices by automating the deployment
process, managing infrastructure configurations, and ensuring consistency, reliability, and scalability
across software deployments. Depending on the specific requirements and preferences of an organization,
different tools may be chosen to support the deployment and release management lifecycle.

12. Case Study – Project Management


Let's consider a case study on project management in software engineering for a hypothetical software
development project:

Project Overview:
Project Name: Online Task Management System (OTMS)

Objective: Develop a web-based task management application to help teams organize, track, and
prioritize tasks and projects efficiently.
Project Scope and Requirements:
1. User Roles:
• Admin: Manage users, projects, and permissions.
• Team Members: Create, assign, and update tasks.
2. Key Features:
• User Authentication and Authorization
• Task Management (Create, Assign, Update, Close)
• Project Management (Create, Update, Delete)
• Task Status Tracking (Open, In Progress, Completed)
• User Notifications and Reminders
3. Technology Stack:
• Frontend: React.js
• Backend: Node.js with Express.js
• Database: MongoDB
• Authentication: JSON Web Tokens (JWT)
Project Management Approach:
1. Agile Methodology:
• Adopt Agile principles and practices, including iterative development, continuous feedback, and
adaptive planning.
• Use Scrum framework with sprints of 2 weeks duration for incremental development and delivery.
2. Team Structure:
• Scrum Master: Responsible for facilitating Scrum events, removing impediments, and ensuring team
adherence to Agile principles.
• Development Team: Cross-functional team comprising frontend and backend developers, UI/UX
designers, and QA engineers.
3. Project Phases:
• Initiation Phase: Define project objectives, scope, and requirements. Create project plan, schedule,
and resource allocation.
• Planning Phase: Break down requirements into user stories, tasks, and acceptance criteria. Prioritize
user stories based on business value and complexity.
• Execution Phase: Implement user stories and features in sprints. Conduct daily standup meetings,
sprint planning, sprint review, and retrospective meetings.
• Monitoring and Control Phase: Monitor project progress, track sprint velocity, and identify and
address issues and risks in real-time.
• Closure Phase: Conduct final acceptance testing, document lessons learned, and prepare for project
deployment and release.
Project Tools and Technologies:
1. Project Management Tools:
• Jira: Use Jira for backlog management, sprint planning, task tracking, and issue resolution.
• Confluence: Collaborative documentation tool for capturing project requirements, user stories, and
design specifications.
2. Version Control:
• Git: Utilize Git for version control, branching, and merging of code changes.
3. Communication and Collaboration:
• Slack: Real-time messaging platform for team communication, collaboration, and coordination.
• Zoom: Conduct virtual meetings, sprint reviews, and retrospectives with distributed team members.
4. Testing and Quality Assurance:
• Jest and Enzyme: Unit testing frameworks for frontend React components.
• Postman: API testing tool for testing backend RESTful APIs.
• Selenium WebDriver: Automated browser testing for end-to-end testing of web applications.
Project Deliverables and Milestones:
1. Minimum Viable Product (MVP):
• Basic functionality including user authentication, task creation, assignment, and status
tracking.
• MVP delivered by the end of Sprint 3.
2. Enhancements and Iterations:
• Incremental feature additions and improvements based on user feedback and stakeholder input.
• Regular sprint reviews and retrospectives to identify areas for optimization and refinement.
Project Success Metrics:
1. User Adoption and Satisfaction: Measure user engagement, feedback, and adoption rates through user
surveys and analytics.
2. Delivery Timelines and Budget Adherence: Track project progress against planned timelines and budgets.
3. Quality and Reliability: Monitor defect rates, system performance, and user-reported issues post-
deployment.
4. Stakeholder Satisfaction: Assess stakeholder satisfaction through regular project status updates, demos,
and feedback sessions.
Conclusion:
Effective project management is critical for the successful delivery of software projects. By following
Agile principles, leveraging appropriate project management tools, and fostering collaboration and
communication among team members, the Online Task Management System project aims to deliver a
high-quality, user-friendly software application that meets the needs and expectations of its stakeholders.
