
Software development life cycle

In this video 

Waterfall model: Brief overview


- [Instructor] The Waterfall approach appears most intuitive to us at first sight. You gather requirements for a project. Once the requirements are complete, you complete the design. When the design is complete, you write code and complete development, and so on. The Waterfall model owes its origin to manufacturing. When we think of
manufacturing industry, we often think about a production plant that produces identical
products in a consistent manner. All planning is done up front with detailed planning
documentation and the scope of work is generally fixed. The process of manufacturing
products is largely automated and includes well-defined checklists, processes, and
tools. The general pattern here is that the output of a phase becomes input of the next
phase. If you introduce an error in one phase, that error will propagate to all other
phases. For example, if you have incorrect or incomplete requirements, those
requirements will lead to an incorrect design, and that design will result in an incorrectly developed or missing software feature. Many traditional Waterfall projects have fixed
scope because you tend to freeze the scope of one phase before starting the
next. Processes and final product are generally very well documented in a Waterfall
project. This is what the Waterfall lifecycle looks like. You start with what is called
Requirement Analysis phase where you capture requirements. This is followed by
Analysis and Design phase where you produce high level design and test
specifications. This is followed by the Development phase where you build the software
system. The next phase is the Test phase where you match the output of the system with
the expected outputs defined in the test specifications. And the last phase in the
Waterfall model is the Deployment and Maintenance phase where the application is
deployed to production and ongoing maintenance continues. In each phase, the output
of the previous phase is the input to the next. The input to Analysis and Design phase is
the output of the Requirements phase. So, what's wrong with the Waterfall model? The
first problem is that the customer does not get to see the product until the testing phase, which usually begins about two thirds of the way through the product timeline. You could be in the Deployment and Maintenance phase when you realize that the product you were building was no longer viable due to changes in market conditions, organizational direction, or the technology landscape. Or you could realize that the
product had a major architectural flaw that prevented it from being deployed. In other
words, your product development initiative could completely fail after a lot of money
and time had been spent on it. In software development, everything
changes. Requirements, skills, people, environment, business rules, et cetera. As time
progresses, you learn better techniques of doing things. Your stakeholders need to
change requirements to match changing organizational strategy or changing market
conditions. In other words, the only guaranteed thing is change, and we need a process that lets us refine our work. Software development is inherently an iterative process and does not work like a Waterfall cycle. Overemphasis on checklists and controls does not
help because software development is human centric and is heavily dependent on
judgment and creativity. Software is not a product designed to be built by assembly
lines.

Waterfall model: Application


- When waterfall projects started failing, many organizations treated this failure as if
there was a failure in a production factory. So they tried to fix their waterfall
approach by adding more comprehensive documentation. Having a well-documented software system is good, but the documentation by itself adds no value to the stakeholders. Also, many software teams resorted to maintaining comprehensive checklists to make sure they were producing systems of high quality. Checklists such as coding standards and architectural reviews are helpful, but you cannot produce a single recipe book for building software. So more time should be spent on delivering working software features early and often, and on enlisting customer feedback. There are other situations where the waterfall approach may be applicable. Waterfall may still work fine for very simple
and small systems. Enhancements to software systems in an ongoing maintenance phase might work with the waterfall model. This is specifically applicable if the development team has good domain knowledge, and both business and technical stakeholders are good at working with each other. The waterfall approach may also be applicable to mission-critical systems, where you need gated checks to avoid catastrophic failures. An example is a software system where a defect can cause human casualty. Comprehensive documentation is also very applicable here.

Spiral model
- [Narrator] The Spiral development model was presented by Barry Boehm in his
research paper in 1986. It was one of the oldest software development models that
proposed an iterative development approach to building software. The development
approach was a mix of Waterfall and Iterative development models. The theme of this
approach was that building software iteratively leads to identification and easier
management of risks. If we were to visualize this model on a straight line, it is essentially
a series of Waterfalls. Each Waterfall contains four phases: Planning, Risk Analysis, Engineering, and Evaluation. When you start
an iteration, you build on top of the output of the previous iteration. One key thing to
note here is that each iteration is different from the previous iteration because, as you
build a system, you get better understanding of the requirements and continue to
mitigate risks. And last, but not the least, risk management is an integrated part of Spiral
Model. Let's review Spiral Model in a little more detail. The first phase is called
Planning. This phase includes requirements identification and analysis, identification of
stakeholders, and lifecycle objectives of the system being built. Also, you identify win
conditions, that would define what is considered success for your project. The second
phase is called Risk Analysis, and includes risk-related activities, such as risk
identification, risk prioritization and mitigation. This is the phase when you
build prototypes to mitigate risks. You may undertake activities such as
identifying alternate solutions, so you can reduce or avoid risks. In the early iterations,
you may have a simple prototype, but in the later iterations, you may build a complete
prototype, or a release candidate. The third phase is the Engineering phase, where you
perform software implementation activities, such as detail design, coding, unit and
acceptance testing and deployment. In the early iterations, you may have something as
simple as a design model, but in the later iterations, you may end up coding and
deploying a complete solution. The fourth phase is Evaluation, where you get your
stakeholder review and feedback, and plan the next iteration. So in the first Waterfall,
represented by this spiral, you may produce just a prototype for early feedback. The
second Waterfall is built on top of the first Waterfall, and may produce a release
candidate. The third Waterfall could produce a launch candidate. You can visualize Spiral
Model life cycle with the help of a graph. The four quadrants of the graph represent four
phases of the Spiral Model, starting with the top left quadrant and going clockwise. You
start with the first iteration and transition through the four phases iteratively. The top
right quadrant is focused on risk management, and most of the prototyping work is
done here. The bottom right quadrant is where you perform software engineering
activities. The bottom left quadrant ends with the customer's evaluation of the
product. The X-axis here represents the review and approval that ends each cycle. I would like to repeat here
that each iteration builds upon the output of the previous iteration. The Y-axis in this
graph represents cumulative cost of the product being built. As you can see, the size of
the spiral grows with time, and the overall shape of this graph led to this model being
named as Spiral Model. The Spiral software development model was a pioneering
approach at that time to identify that software development is inherently an iterative process, something that modern Agile processes identified and advocated a few years
later, so this model was ahead of its time. One of the four phases in the model is Risk
Management. Prototyping is extensively used in the Spiral model, and staying focused
on risks reduces the chances of project failures.
Rational Unified Process: Overview
- [Instructor] Rational Unified Process, or RUP, was an attempt to come up with a
comprehensive iterative software development process. RUP is essentially a large pool
of knowledge. RUP consists of artifacts, processes, templates, phases, and disciplines. It
has detailed documentation, guidelines, sample artifacts, and deliverables. RUP is
defined to be a customizable process that would work for building small, medium, and
large software systems. Since the process is customizable, you can choose what you
want, and also customize your process with what are known as plugins. RUP has a
fascinating history. This is not a course on RUP, so don't worry if you're not familiar with
these terms. We will just skim the surface of RUP so we can learn something from
it. Before we go ahead and dissect RUP, let's review its history. Back in the early to mid
1990s, a company called Rational Software developed the Rational Unified Process as a
software process product. This was the era of object-oriented programming, and a set
of standard notations to represent an object-oriented system called UML, or Unified
Modeling Language, was becoming very popular. RUP was greatly influenced by object-
oriented analysis and design, and UML. The early effort to define RUP was led by
Philippe Kruchten, a Rational Software technical staff member, and his team. This effort
was combined with and influenced by other approaches from evangelists and subject
matter experts such as Grady Booch, famous for what is known as the Booch method, Jim Rumbaugh's Object Modeling Technique, et cetera. In February 2003, IBM
acquired Rational Software. A few years later in 2006, IBM created a subset of
RUP, which is more agile centric and is called OpenUP. RUP has four phases: Inception,
elaboration, construction, and transition. These phases should not be confused with
requirements, analysis and design, development, and testing phases of the waterfall
model. Instead, view these phases as containers for small waterfall-like iterations. In other
words, each phase has one or more iterations. Each RUP phase ends with a milestone. At
the end of the inception phase, you will have achieved what is known as the Lifecycle
Objectives Milestone. So you will have all stakeholders agree to what you are going to
build. The elaboration phase ends with baselining the architecture of your software
system, and this landmark is called Lifecycle Architecture Milestone. Construction ends
with achieving Initial Operational Capability Milestone, which is where your software
product is ready to be used by end users. Transition is the phase where you fine tune
your application to make it fully usable at production scale, and this is where you
achieve Production Release Milestone. Each phase has one or more iterations, or mini-
waterfalls. Activities that are logically similar are grouped into RUP disciplines. The
original RUP had six disciplines, called Business Modeling, Requirements, Analysis and
Design, Implementation, Test, and Deployment. They added three more later,
called Configuration and Change Management, Project Management, and Environment.
Rational Unified Process: Life cycle
- [Instructor] This is an example of Rational Unified Process life cycle for software
development. In our example, there is just one iteration in the inception phase which
results in the achievement of the life cycle objectives milestone. Two iterations in the elaboration phase result in baselining the system architecture, and the life cycle architecture milestone is achieved at the end of this phase. Three iterations in the
construction phase make the software product ready for end users, and so the initial
operational capability milestone is achieved. Finally, two iterations in the transition phase make the software optimized for production, and the production release
milestone is achieved at the end of this phase. Please note that the number of
iterations and the duration of iterations in a RUP cycle may vary from project to
project. Let's review a similar RUP life cycle but this time from the perspective of what
happens inside each iteration. Note that inception iteration one is heavy in
business modeling and requirements. Construction iterations are heavy on analysis and
design, implementation, testing, and deployment. Even within a phase, the iterations may be different. For example, the first iteration in the construction phase is heavier in design as compared to the third iteration. If RUP was so
good, why is no one talking about RUP today? It's because RUP was a heavy
process with a lot of documentation. Many people disliked it because they thought they
had to follow RUP religiously and implement all processes and artifacts. RUP was
designed to be customizable, but it was a little too prescriptive and heavy which led to
its downfall. RUP had templates for everything, and each template was pretty
comprehensive. Unfortunately, it needed a significant amount of work to remove or
customize sections in such large templates. Working with RUP was like going to a buffet
with 500 items where it was a struggle to pick the 10 items you want to try. What is
there to learn from RUP? I think many modern software engineering practices taught by
RUP were pathbreaking and are still relevant. Let me mention the six RUP best
practices. The first one is develop iteratively. All modern agile processes and
frameworks recognize that software development is essentially an iterative and
incremental process. The second and third practices of RUP are manage requirements
and manage change. Agile organizations allow and encourage changes in requirements and also make extra efforts to manage requirements and manage changes
of any kind. The fourth best practice of continuously verifying quality is the cornerstone
of continuous integration and deployment practices of DevOps. A picture speaks a thousand words: visual modeling of software with Rational Rose and other tools, the fifth best practice, was pioneered by RUP. The sixth best practice of using component-based
architecture facilitates creation of software based on smaller and more manageable
components. Many customizable versions of RUP such as the ICONIX process and
Enterprise RUP were successfully applied to many small, medium, and large software
development projects.
Chapter 2
Dynamic systems development method (DSDM)
- [Narrator] DSDM stands for Dynamic System Development Method. It was developed
in 1994. This was the era when organizations were slowly moving away from the waterfall approach and construction-industry-like formal processes. When DSDM was
introduced, a new approach called RAD or Rapid Application Development was
becoming the norm. It was fairly easy to build mocked up screens, get quick customer
feedback and build system features very quickly. While this approach was very agile, there was also a growing concern among organizations that Rapid Application Development needed some structure or formal processes to maintain good software
quality. So a group of organizations built a framework for project delivery and project
management called DSDM. This framework had a fairly tight set of rules and was designed to be compatible with ISO 9000 and PRINCE2, a project management framework very popular, mostly in Europe. The DSDM Consortium was formed in
1994 by a group of organizations. DSDM continued to be the project
development standard in most of Europe for the next several years. Please note that
they viewed software system building initiatives as projects, rather than products. In
2014, the DSDM Handbook was made available online, to the public. In October 2016,
the DSDM Consortium changed its name to Agile Business Consortium and they own
the DSDM Framework. You can view and download DSDM templates from the resources
section of the Agile Business Consortium website. At a very high level, the DSDM life
cycle consists of three phases. The pre-project phase is executed at the portfolio and executive management level, where projects are identified and funding commitments are made. The second phase is called the project life cycle phase, consisting of several sub-phases. Additional feasibility analysis is done in the first two sub-phases for the projects identified earlier, and this is followed by sub-phases where the system is built in
an iterative and incremental manner. The last phase is called post-project, which is kind of a post-mortem phase. This is when you determine if the expected benefits of the project have been realized. Also, you look for ways to improve by studying and discussing what went well and what did not during the execution of the project. The key thing to note here is that the DSDM life
cycle is not a waterfall-like life cycle. Different phases can be repeated and you can
iterate between phases. Dynamic system development method mindset can be distilled
to eight key principles. You can find a description of all eight principles on the Agile
Business Consortium website. For example, let's take a look at the eighth principle called
demonstrate control. The goal here is to have a highly visible plan and manage it
carefully and diligently so you do not deviate from your plan and keep things under
control. DSDM advocates the use of several proven practices. Many of them are very
applicable today. Firstly, it talks about timeboxing your activities: each activity is assigned a maximum time limit, which cannot be extended. When the time
limit expires, you stop working on that activity. Timeboxing forces teams to stay
focused and avoid unnecessary refinements. All teams and organizations have
budgetary time and resource constraints, so requirements must be prioritized. DSDM
uses MoSCoW prioritization, which categorizes requirements into must have, should have, could have, and won't have. And last, but not least, DSDM
advocates an iterative and incremental approach which is still the most acceptable way
of building software.
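The MoSCoW categories described above can be sketched as a tiny prioritization helper. This is a minimal illustration, not part of DSDM itself; the backlog items and the idea of sorting by category are assumptions made for the example.

```python
from enum import IntEnum

class MoSCoW(IntEnum):
    # Lower value sorts first; the ordering is the only rule MoSCoW fixes.
    MUST = 1
    SHOULD = 2
    COULD = 3
    WONT = 4

def prioritize(requirements):
    """Sort (name, category) pairs so must-haves come first."""
    return sorted(requirements, key=lambda r: r[1])

# Hypothetical backlog items, purely for illustration.
backlog = [
    ("export report as PDF", MoSCoW.COULD),
    ("user login", MoSCoW.MUST),
    ("dark mode", MoSCoW.WONT),
    ("password reset email", MoSCoW.SHOULD),
]

for name, category in prioritize(backlog):
    print(f"{category.name:<6} {name}")
```

In a timeboxed DSDM iteration, the team would work down this ordered list, and the could-haves and won't-haves are the first to be dropped when the timebox expires.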
Feature-driven development (FDD)
- [Instructor] Feature-Driven Development or FDD is a lightweight and agile process. In
the world of FDD, software is viewed as a collection of working features. A feature is just
a piece of working functionality that has business value. Each FDD team tries to deliver
working software, which is composed of working features. At a high level, FDD is an
iterative process with five steps that will be explored later in this lesson. Let's review
examples of a feature. In a nutshell, a feature is a small piece of working
functionality, typically expressed as action, result, object. For example, in "calculate monthly interest on account balance," calculate is the verb that represents the action. Monthly interest is the result. And account balance is the object on
which an action is performed. Let's review the FDD life cycle, which consists of five steps. The first is develop overall model: in this phase, a high-level initial domain model is built based on the team's understanding of the problem domain. This is like a class diagram, which depicts most of the business concepts in your problem domain and how they are related to each other. The second is build feature list: in this phase, the
domain is divided into subject areas and each subject area contains business activities or
work flows. Steps in these activities are identified as features. A guideline in FDD is to
keep features very fine grained so they can be implemented in just a short iteration of
two weeks or less. In the plan by feature phase, features are prioritized and a feature
implementation plan is developed. This includes, feature assignments to developers. In
the design by feature phase, more details are added to the classes in the domain
model. Sequence diagrams are developed to flesh out the implementation details. The
goal here is to have a design model that can be used to implement the
features. Thorough inspection of design is done to make sure the design will be able to
meet requirements. In the build by feature phase, the classes that implement features are
implemented, tested, inspected, and deployed. The last two steps, design by feature and
build by feature are executed in parallel by different feature owners for different sets of
features. FDD was developed in the era of unified modeling language. A set of notations
to represent an object oriented system. So a lot of UML and object oriented design is
used in FDD. Tracking completion status in any software development effort is challenging, subjective, and dependent on human judgment. We looked at the five steps of FDD. The last
two steps included multiple milestones. Each milestone in FDD has a percentage
completion assigned to it. So if a feature is done up to a milestone, the completion
percentage is the sum of completion percentages of all milestones up to that
milestone. In our example, the percentage completion is the sum of percentage
completions of milestones 1.1, 1.2, 2.1, and 2.2, and it is 60%. FDD is focused on
features, something that provides business value to customers. Team organization in
FDD is based around features and those teams, also called feature teams have the skills
to implement end to end functionality. This is a great practice implemented by
FDD. FDD scales very well because teams can work in parallel on multiple features and
can integrate or reuse their work whenever necessary. In fact, the earliest FDD implementations were used to build large banking systems successfully. The ability to
track completion status is another great characteristic of FDD that was discussed earlier.
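The milestone arithmetic described above can be sketched in a few lines. The milestone names and percentage weights below are illustrative assumptions, chosen so that the first four milestones sum to the 60% mentioned in the example; a real FDD team assigns its own weights.

```python
# Hypothetical milestone weights for one feature (they must sum to 100).
MILESTONES = [
    ("1.1 domain walkthrough", 10),
    ("1.2 design", 20),
    ("2.1 design inspection", 15),
    ("2.2 code", 15),
    ("3.1 code inspection", 30),
    ("3.2 promote to build", 10),
]

def completion(done_through):
    """Percentage complete for a feature: the sum of the weights of all
    milestones up to and including the last completed one."""
    total = 0
    for name, weight in MILESTONES:
        total += weight
        if name == done_through:
            return total
    raise ValueError(f"unknown milestone: {done_through}")

print(completion("2.2 code"))  # 10 + 20 + 15 + 15 = 60
```

Because each milestone has a fixed weight, every team member computes the same completion percentage, which is what makes FDD status tracking objective.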

Crystal methods overview


- [Man] Crystal, refers to not just one methodology, but a family of methodologies. Each
methodology is represented by a specific color, and the methodology you choose
depends on two factors. Team size, and criticality of the product being built. Crystal
methods evolved as a result of research being done by Alistair Cockburn in the
1990s. Crystal methods are people-centric, light-weight, and highly flexible. Let's review
the crystal family of methodologies in a diagram. The Y-axis represents what is known as
criticality. In other words, it represents the severity of the damage caused by
malfunction of the system being built. The X-axis represents team size. Each vertical
column represents one crystal methodology. Let's review criticality first. At the lowest level of criticality is comfort; this means a malfunction of the system being built causes only loss of comfort. An example is a mobile app that adds special effects to pictures but hangs once in a while. This is at the bottom of the Y-
axis. Discretionary money is defined as extra savings at an organization or individual's
disposal. So, a system malfunction at this level of criticality, would cause loss of
discretionary money. Up next in the criticality ladder, is the software where a
malfunction could cause loss of essential money. For example, loss of money to a
store, caused by their website being down for a while. At the other extreme of criticality is software where a malfunction could result in loss of life. For example, a defect in an aircraft's navigation software. This is at the highest level
of criticality, as shown at the top of the Y-axis. The X-axis represents team size. Please
note, that it's just the team size that drives which crystal methodology, like crystal clear,
or crystal orange, needs to be picked. As you move up or to the right in the graph, the criticality of your software system and the team size increase, and you need more formal procedures and deliverables. For example, if you need a team size of 30 persons and you are building software where a defect could result in loss of essential money, you need an Orange E40 project. Note that each vertical lane represents a family of crystal projects. In our example, we need a crystal orange method. The simplest crystal
method is called crystal clear, which requires the smallest number of roles and deliverables. For example, crystal clear just needs three roles: sponsor, senior designer, and programmer. And just one deliverable: working software with minimal documentation. A crystal orange method needs more roles, such as architect, sponsor, business analyst, project manager, etc., and more deliverables, such as a requirements document, user interface design, user manual, and test cases. The crystal method you
choose depends on the team size, which was shown on the X-axis of the graph in the
previous slide. But as the criticality of the system being built increases, you may need to
tweak your processes, to address the extra risk involved there. This was shown on the Y-
axis in the previous slide. For example, if a system malfunction can cause loss of human
life, you may need to introduce stricter quality checks. Crystal methods advocate a
methodology, where people are at the center of the universe. In Alistair Cockburn's
research, he determined that the most important factors for a project's success are
people's interactions, effective communication and collaboration. Crystal methods was
perhaps the first approach that prioritized people over formal processes. Crystal
methods uses frequent delivery of working software, and teams improve their efficiency
and output by a consistent process of reflective improvement. The crystal methods
approach brought forward the point that there is no "one size fits all" for all
projects. Crystal methods includes many modern practices, such as automated tests and
frequent integration that are still in use today. Crystal methods did an excellent job of
communicating the message: right-size the team to fit the scope of your project. But it did not offer the best strategy for managing each of these differently sized projects. That's the key reason crystal methods were not more widely adopted; there were other, better methodologies.
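Crystal's selection rule (team size picks the color; the criticality letter plus the size band names the project, like the Orange E40 example) can be sketched as below. The size thresholds are commonly cited figures, not numbers this course states, so treat them as assumptions.

```python
def crystal_method(team_size, criticality):
    """Pick a Crystal methodology name, e.g. 'Orange E40'.
    `criticality` is one letter: C (comfort), D (discretionary money),
    E (essential money), or L (life)."""
    # Assumed size bands: the vertical lanes in the course's graph.
    bands = [("Clear", 6), ("Yellow", 20), ("Orange", 40), ("Red", 80)]
    for color, max_size in bands:
        if team_size <= max_size:
            return f"{color} {criticality}{max_size}"
    return f"Maroon {criticality}200"

# The text's example: 30 people, loss of essential money.
print(crystal_method(30, "E"))  # Orange E40
```

Note that only team size drives the color here; as criticality rises toward loss of life, the chosen methodology is then tweaked with stricter quality checks, as the transcript describes.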
Scrum overview
- [Instructor] Scrum is the most popular and widely adopted agile framework. It is a
lightweight framework with just a handful of events, roles, and deliverables, and a small set
of rules and guidelines. Scrum is often described as a deceptively simple framework. The
inventors of Scrum defined it as a framework that is easy to understand and difficult to
master. Most people get up and running with Scrum in just a couple of hours. But
getting optimal output with Scrum requires experience and practice. If I were to describe
Scrum in one sentence, I would describe it as, an agile framework, where a small cross-
functional team works in short iterations, called sprints, to create working
software. Scrum was introduced by Ken Schwaber and Jeff Sutherland in 1995. The most
authoritative resource on Scrum is the Scrum Guide, which is available at
scrumguides.org. It is less than 20 pages and you can read the entire guide in an hour or
less. This is a high-level overview of the Scrum workflow. Development Team members,
Product Owner and Scrum Master collaborate to work in short iterations of 30 days or less, called sprints. They start from a to-do list of work items, called the product backlog, and
pull a subset of work items to implement in a sprint. The subset of work items and a
working plan is called the sprint backlog. The development team conducts their daily
sync ups called daily Scrum meetings and continue to implement work items. At the end
of the sprint, product increment, in the form of working software is produced. The three
Scrum roles are, Product Owner, Scrum Master and Development Team member. Let's
take a look at each role in a little more detail. The Product Owner owns the product backlog. The product backlog is the list of all to-do items, such as tasks, defects, user
stories, functional and non-functional requirements, enhancements, et cetera for the
development team. And the Product Owner is accountable for the contents of the
product backlog and is the final authority on the order, or priority of items in the
product backlog. The Product Owner has the final call on what needs to be built and in
what order. Even though the Product Owner may be influenced by a committee of
business stakeholders, the Product Owner role is fulfilled by an individual, not a
committee. The Scrum Master is the agile coach. Think of a Scrum Master as a guide or mentor who teaches the Scrum team Scrum rules and practices, and facilitates Scrum events if necessary. They also help the team optimize performance with those agile
practices. Scrum Master is a servant leader in the sense that they lead the team by
helping them remove hurdles that prevent the team from performing optimally. Scrum
Masters do not manage the Development Team, they coach and convince the team to
work on an optimal path. Development Team member is a generic umbrella name given
to all Scrum team members such as, developers, testers, documentation experts,
database administrators, system administrators, et cetera. The Development Team is
cross-functional, which means the entire team combined has all the skills necessary to
build, test and deploy fully-functional software features. Development Team members
are self-organizing, which means they know how to do their work with minimal
interference or help from outside. There are three Scrum artifacts, or deliverables: the product backlog, sprint backlog, and product increment. The product
backlog is the master list of what needs to be built. The second artifact is the sprint
backlog, which includes a subset of product backlog items selected to be implemented
in the current sprint, plus a task list for those selected items. The third artifact is the
product increment, a slice of functionality produced in the current sprint, combined with
the functionality produced by all previous sprints.
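The relationship between the artifacts can be sketched as a small data model: the sprint backlog is a prioritized subset of the product backlog. This is a simplified illustration of the ideas above, not an API from any Scrum tool; the item names, priorities, and capacity number are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int  # ordered by the Product Owner; lower = higher priority

def plan_sprint(product_backlog, capacity):
    """Sprint backlog: the top-priority subset of the product backlog
    that fits the team's capacity for one sprint."""
    ordered = sorted(product_backlog, key=lambda item: item.priority)
    return ordered[:capacity]

# Hypothetical product backlog, owned and ordered by the Product Owner.
product_backlog = [
    BacklogItem("order history page", 3),
    BacklogItem("checkout flow", 1),
    BacklogItem("gift cards", 4),
    BacklogItem("payment gateway integration", 2),
]

sprint_backlog = plan_sprint(product_backlog, capacity=2)
print([item.title for item in sprint_backlog])
```

Each sprint's output, added to what all previous sprints produced, forms the third artifact, the product increment.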

Scrum workflow
- [Narrator] Let's see how a Scrum Team builds a product incrementally and
iteratively. At the beginning of the Sprint, the Scrum Team comes together for an event
called Sprint Planning. In this event the Development Team works with the Product
Owner to select which items to work on. The Scrum Team also drafts a Sprint
Goal, which is just a few sentences that describe the high-level business goal of the
Sprint. Then the Development Team builds an initial plan on how they will implement
those Product Backlog items. The Scrum Master is available to facilitate, as and when
required, and the Product Owner is available to answer any questions. The Development
Team spends the time in the Sprint developing the features listed, testing to ensure that they meet specifications, and combining them into a workable software deliverable. The Sprint
concludes with a Sprint Review and a Sprint Retrospective. This is another look at the
overall view of how the Scrum life cycle works. We already covered Sprint Planning, let's
take a deeper look at several key Scrum events: Daily Scrum, Sprint Review and the
Sprint Retrospective. The Development Team continues to meet each day in an
event called the Daily Scrum, which is timeboxed to 15 minutes. The Scrum Master and
Product Owner can participate in this event, but are not required attendees. This is a
Development Team huddle or sync-up, not a status meeting. It is an opportunity for the
Development Team to measure if the team is on track to meet the Sprint Goal. Many
Scrum teams follow the three-question format, which means each Development Team
member answers three questions. One, "What did you do since the last Daily
Scrum?" Two, "What are you planning to do today?" Three, "Are there any
impediments to what you are trying to accomplish?" The three-question format is not mandatory, and the
Development Team can conduct this event in whatever way they want. The
Development Team produces a Product Increment at the end of each Sprint, which is
reviewed in an event called Sprint Review. This is when the Scrum Team and a group of
stakeholders get together to inspect the Product Increment. This is an informal event,
and the feedback here is used to plan future Sprints, and not seek formal
acceptance. This event is timeboxed to four hours for a 30 day Sprint and must be
adjusted for shorter Sprints. The last event of the Sprint is called Sprint
Retrospective, when the Scrum Team inspects every aspect of their work other than the
Product Increment itself. They discuss, and come up with an Action Plan on how they
can do better with their processes, team communication, tools, skills, etcetera. The team
commits to a subset of their Action Plan items. This event is timeboxed to three hours
for a 30 day Sprint and must be adjusted for shorter Sprints. If I were to pick two key
aspects of Scrum, I would highlight these two: One, it is simple but comprehensive
framework to build products incrementally and deliver business value faster and more
often. Two, one key rule of Scrum is to produce a potentially shippable Product
Increment at the end of each Sprint.
Lean overview and key concepts
- [Instructor] Lean originated in the post-World War II era. This was the time when many
economies were devastated and people had poor buying power. There was a lot of
demand to rebuild factories and manufacture products with maximum efficiency. The Toyota
Production System, or TPS, was one such efficient and popular system. It was a
combination of management style, working culture and production environment that
was aimed at reducing waste and manufacturing at optimal efficiency. Toyota's
manufacturing process inspired Mary and Tom Poppendieck's 2003 book, "Lean
Software Development," where they taught how many of these manufacturing
philosophies and techniques could be applied to software development. Lean principles
can be applied to software and hardware projects. Lean software development is a
collection of principles centered on the key tenet of minimizing waste. This is achieved
through processes that visualize the production pipeline. Various techniques are used to
highlight bottlenecks and reveal efficiencies or inefficiencies within the system. A
common technique that can be used to identify waste is called value stream
mapping. Let's focus on it first. Think of a value stream map as a workflow or a sequence
of steps to produce a product or service with business value. For example, the sequence
of steps that starts with ordering a pizza to the point when the pizza is delivered, can be
called a value stream. A value stream contains steps that add value and, in a perfect
world, you'd like the value stream to have all the steps with value-add and no wasteful
step. This is a slightly over-simplified example of a manufacturing value stream
map. Information flows from right to left in the upper part of the diagram, where a
customer submits an order to an office, which orders raw materials from a supplier. This
is the information flow part of VSM, or value stream map, and would be produced for an
aggregate of customer orders. In the middle, is the materials flow that shows the
sequence of steps to produce the product. At the bottom is the time ladder, where the
lower staircases show actual processing time, whereas elevated staircases show lead
time, or the total time elapsed in a manufacturing step. As you can see, the total lead
time, or total elapsed time, of all steps combined in the materials flow is 130
minutes, whereas the real processing time is 65 minutes. So there is apparently 65
minutes of non-value-added time that could be reduced. So let's summarize what we
learned about value stream mapping. We use value stream mapping to identify areas of
improvement in a value stream. A typical manufacturing value stream map depicts all
steps at the macro level and includes three types of information, information flow,
material flow, and what is called a time ladder. The time ladder at the bottom of a value
stream map is critical because it identifies value-add time, as well as non-value-add
time. A value stream map, or VSM, contains information flows such as information
flowing from a customer to an office and a warehouse. You can also visualize the flow of raw
material from a supplier to a factory, up to a shipping truck that transports the finished
product. A time ladder shows value-added time and the total time consumed for producing
a product or service. We will see another example of a VSM shortly.
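The time-ladder arithmetic above can be sketched as a short script. The step names and durations below are hypothetical, chosen only so the totals match the 130-minute and 65-minute figures from the example:

```python
# Hypothetical value stream: each step has a lead time (total elapsed time)
# and a processing time (actual value-add work), both in minutes.
steps = [
    {"name": "cutting",  "lead": 40, "processing": 20},
    {"name": "welding",  "lead": 50, "processing": 25},
    {"name": "assembly", "lead": 40, "processing": 20},
]

total_lead = sum(s["lead"] for s in steps)              # 130 minutes
total_processing = sum(s["processing"] for s in steps)  # 65 minutes
non_value_add = total_lead - total_processing           # 65 minutes of waste

print(total_lead, total_processing, non_value_add)
```

The gap between the two totals is exactly the non-value-add time that value stream mapping tries to expose.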

Lean value stream mapping


- [Instructor] As we look at value stream mapping, I'd like to introduce you to two key
terms before we proceed. Lead time. Lead time is defined as the total elapsed time from
a time a request for a product or service is made to the time the product or service is
made available. If you receive pizza within 45 minutes of ordering it, that is the lead
time. Cycle time is a subset of the lead time: it is the duration of time where
some work is performed on the product. In our pizza delivery example, this is the time
spent on steps like baking, packaging, et cetera. You can define the lead time and cycle time
of a single step in a workflow or of the entire workflow. Lead time is greater than or equal to
cycle time, and the goal should be to make lead time equal to cycle time. Let's take a
look at a business process value stream map. This is similar to the one we reviewed
earlier, but it does not have an information flow section. It shows an insurance claims
processing workflow with the following steps. The customer files an insurance
claim. The claim is reviewed by an adjuster. The adjuster's role is to review the
damage and record all evidence of claim. After the adjuster enters all the
information into the claims system, the customer service representative notifies the
customer about the claim. Based on the customer's feedback, the representative
updates claim information, which is forwarded to the payment system, which issues the
claims funds. Do you see any red flags in this VSM? There are multiple areas of
improvement in this process. One thing that stands out is that there are multiple
systems that are used in processing the claim. This probably means duplicate data
entry and possible extra steps to keep the claims and payment systems in sync. Also, the
total lead time is 121 hours, and the total processing time is 3.25 hours, which is about three
percent of the total lead time. You can build the as-is and to-be VSMs of your
processes and look for areas of improvement.
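The efficiency figure above can be verified with a two-line calculation:

```python
# Process efficiency of the insurance-claims value stream:
# the share of total lead time that is actual value-add work.
lead_time_hours = 121.0        # total elapsed time for a claim
processing_time_hours = 3.25   # time someone actually works on the claim

efficiency = processing_time_hours / lead_time_hours
print(f"{efficiency:.1%}")  # roughly 2.7%, i.e., about three percent
```

In other words, for every hour a claim spends in the pipeline, only a couple of minutes of real work happen to it.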

Lean principles
- [Instructor] Lean software development is a collection of principles centered on
maximizing efficiencies and minimizing waste. Let's dig deeper into the core principles
of lean. Let's continue to review lean principles as applicable to software
development. Eliminate waste. Lean thinking teaches us to think from the perspective of
value addition for the customer. Any process or work that does not add value is
waste. Firstly, we need to understand what is value to our customer. Secondly, when we
produce software features, we need to produce that value. Nothing more, nothing
less. Adding more software features that the customer has not asked for is called gold
plating and is a waste. We need to understand implicit requirements but that is
something that should be learned with the help of continuous feedback from the
customer. Unnecessary processes or switching between tasks, also called context
switching, are other examples of waste. Amplify learning. Agile practices are iterative, with
delivery of business value, early and often. Software development is complex with the
potential for changing requirements and priorities. The recommended lean practice is to
constantly learn by getting regular and frequent feedback from stakeholders. Decide as
late as possible. This principle appears counterintuitive but is a good practice and
applicable to software development. Many methodologies were developed in the last
century that advocated detailed checklists that provided clear decisions and policies
defined early in the process. Making early decisions was fraught with risks because
everything changes in software development. Customer requirements change, market
conditions change, technologies evolve with time and change. Late decision is actually
an inappropriate term here. The appropriate term is last responsible moment. We need
to wait till we know all the facts and then make a decision at the last responsible
moment. You can only define a database tuning approach after you know what
database engine will handle your customers data volumes and workloads. Deliver as fast
as possible. When you deliver working features to customers quickly and often, you are
bringing value to the customer. Longer iterations carry bigger risks of a large number of
unidentified issues, changes in market conditions, and technology changes. Smaller
iterations are easier to manage. Empower the team. Agility is all about bottom-up
intelligence. Empowering a team fosters creativity and motivates the team to do
more. People that design and build software are knowledge workers and should be
given enough autonomy and support to bring out their creativity and sense of
responsibility to build great solutions. Build integrity in. Lean teams focus on making
sure the customer has good overall experience of the system. This is called perceived
integrity. That is how the system is delivering value and is usable and easy to
maintain. The other type of integrity is called conceptual integrity which works at a more
micro level. In this case, the system works like a well-oiled machine that comprises
good, well-tested components. Lean teams build integrity right into their process of
building software system through refactoring, thorough testing, and consistent and
frequent communication with the customer. Building integrity should be an intrinsic
part of the entire process, not an afterthought. See the whole. Lean teams focus on
understanding the entire workflow and work on optimizing the entire process, rather
than improving a subset of the entire workflow. We are often tempted to improve just
one step in the process, assuming that each step improvement will improve the entire process,
but in many cases, you cannot optimize the entire process without getting the big
picture and knowing the entire process. For example, you can expedite the software defect
identification process, but that does not optimize the entire development workflow, maybe
because you have a skills problem with the developers.

Kanban
- [Instructor] Kanban is a work process management methodology based on lean
principles. Kanban can be summarized in two key concepts. One, visualize your work,
and two, limit work in progress. Even though Kanban looks simple at first sight, it is a
very powerful approach based on queuing theory. Let's see an example of how
a software development team applies Kanban. This is an example of a Kanban
board developed with the LeanKit tool. Each vertical lane in the board represents the status of
a work item. In our Kanban board, we have four work item statuses: planning and
coordination, design, develop, and accept. Work items such as software features, user
stories, and defects are pulled into the board and processed through the board from left
to right. All completed work items are eventually pulled to the rightmost lane. Kanban is
a visual system because Kanban board gives you a quick summarized view of the work
in progress. Each lane can be viewed as a work queue and has a work in progress, or WIP,
limit. The WIP limit indicates the maximum number of items that should be added to that
lane. Limiting the count of items in a queue is required to work at optimal efficiency. WIP
limits can be justified in arithmetic, as well as non-arithmetic, terms. The non-arithmetic,
lay person's explanation is that the WIP limit in each lane depends on the number of
people working on items in that lane's queue and their skills and experience
level. If you work on too many items simultaneously, you will end up working at sub-
optimal efficiency. The arithmetic explanation of limiting work items is based on queuing
theory, centered on a formula defined in the 1950s called Little's Law. Let's do a quick
review of this formula. Even though Little's Law was originally stated in a totally different
context, it can be stated in the work management as a simple arithmetic expression as
follows: the number of items in a work queue is equal to the average time spent on each
work item multiplied by the arrival rate of work items. Or, equivalently, the average time spent on
each work item is equal to the number of items in the work queue divided by the arrival rate of
work items. So if you wish to work at higher efficiency, that is, spend less time per work
item, you can either reduce the number of work items in the work queue, which is the WIP in the
expression, or you can reduce the rate at which work items arrive into the queue. The
number of items in the queue is the same as the WIP limit, and needs to be limited.
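As a quick check of the arithmetic above, Little's Law can be sketched in a few lines. The WIP count and arrival rate below are hypothetical:

```python
# Little's Law: items_in_queue = arrival_rate * avg_time_per_item,
# or equivalently: avg_time_per_item = items_in_queue / arrival_rate.
items_in_queue = 6   # WIP: work items currently in the lane (hypothetical)
arrival_rate = 2.0   # work items entering the lane per day (hypothetical)

avg_time_per_item = items_in_queue / arrival_rate  # 3.0 days per item

# Lowering the WIP (with the same arrival rate) reduces time per item.
avg_time_with_lower_wip = 4 / arrival_rate  # 2.0 days per item
```

This is why capping the WIP limit in a lane directly shortens how long each item lingers in that lane.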
Kanban board
- [Instructor] Kanban is a visual process management approach. The Kanban Board is a
powerful and effective communication tool about the team's progress and
bottlenecks. Value Stream Mapping from lean principles is very easy to apply to
Kanban. This is because you can visualize your entire workflow or value stream via a
Kanban Board. In fact, Kanban is based on lean principles. Let's take a quick look at a
slightly improved version of Kanban Board. This Kanban Board just shows two
statuses, Planning/Coordination and Design. Each status or queue has a WIP limit. The first
queue has reached its WIP limit, as indicated by the highlighted number four. Splitting
each column into two parts, doing and done, shows if the items in the queue are
stuck due to the next queue or if there is an issue in the same queue. In our example,
both the queues have reached their WIP limit, but the first queue has all items done, and
the second queue cannot pull items from it. Also, the second queue has an item View
Trainer Profiles as blocked, and is shown by an X next to it. As you can see, the Kanban
Board communicates a lot of information about the process in a concise and simple
way. In our case, the bottleneck is the design step and not the planning/coordination
step. One key feature of Kanban is it is a pull system. Items are pulled into a queue when
the queue has room for more items. There are no artificial time box boundaries like
sprints in Kanban. Work items are delivered when they are ready, and there is no need to
wait for an iteration time box to expire. So, the customer gets value delivered
continuously. Kanban is very lightweight, and besides the Kanban Board, the only other
practice used by most Kanban teams is daily standups.
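The pull rule described above can be sketched as a small script. The lane names, WIP limits, and work items are hypothetical:

```python
# A minimal sketch of a Kanban pull rule: an item may be pulled into a
# lane only if that lane is below its WIP limit.
board = {
    "planning": {"wip_limit": 4, "items": ["A", "B", "C", "D"]},
    "design":   {"wip_limit": 3, "items": ["E"]},
}

def can_pull(board, lane):
    """Return True if the lane has room for one more work item."""
    lane_state = board[lane]
    return len(lane_state["items"]) < lane_state["wip_limit"]

def pull(board, from_lane, to_lane):
    """Pull the oldest item from one lane into the next, if allowed."""
    if board[from_lane]["items"] and can_pull(board, to_lane):
        item = board[from_lane]["items"].pop(0)
        board[to_lane]["items"].append(item)
        return item
    return None  # blocked: WIP limit reached, or nothing to pull

pull(board, "planning", "design")  # moves "A" into the design lane
```

Once the design lane fills to its limit of three, further pulls return `None`, which is exactly the bottleneck signal a Kanban board makes visible.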
Extreme programming
- [Instructor] Extreme Programming, or XP, is a fine-grained implementation-centric
approach. It can be viewed as a collection of software engineering practices. XP was
developed by Kent Beck in the 1990s, and it has its own set of values, rules, principles, and
practices. XP recommends a customer-driven iterative approach with short weekly
iterations. It proposes a collaborative approach where customers provide
requirements, and the developers break the requirements into tasks and assign tasks to
themselves. There's also a quarterly iteration that is a container for the weekly
iterations and allows teams to take a more macro-level view of the work and spend time
at a higher level on planning. XP teams believe in just-in-time design. So they start at a
simple high-level design and continue to evolve the design. They also refactor code
continuously to improve code quality and address technical debt. As developers build
code, they integrate their code with the code of other developers very frequently. XP
teams run short builds of 10 minutes or less. XP has nearly a dozen core practices. More
information about these practices is available at the Agile Alliance website. In this
lesson, we will cover two XP practices that are used by many teams even today. They are
pair programming and test-driven development. Let's review these practices in a little
more detail. In pair programming, two developers sit at a computer terminal. One writes
code, while the other one helps the developer that writes code. The second person
constantly reviews the code as it is being developed, asks questions about the
implementation, and assists the other developer with coding suggestions. The two
developers switch roles periodically, and the process continues. This approach
introduces continuous inspection of code and, hence, leads to code quality
improvement. Test-driven development means you do not write any code unless you
have a failed test for it. It is a three-step process. The first step is to write a test for a
function that is yet to be written. The code will not compile. The second step is to write
the function so you have just enough code to make sure the code compiles. The test
should fail. If the test passes, then the test is inadequate to verify any functionality and
should be refactored. The third and the last step is to complete coding of the
function to meet the requirements of the test. After you write each test, you refactor the
code to meet the requirements of the test. Then you write another test and refactor
code to pass the test. This process continues until you have exhausted your list of
tests, and your code is complete. Test-driven development sometimes exposes poorly-
designed monolithic code. If you are struggling to write unit tests for a piece of
code, it may need refactoring to break it into more manageable chunks. Developers take
pride in their work. They test their code very thoroughly, but testers always find defects
in our code. Testers have no bias and make no assumptions about the code. That is why
they find problems that the developers don't find. This is what test-driven
development enables developers to do: reduce their bias towards their code. Extreme
Programming has been criticized for being over-simplistic and ineffective for building
large systems, or for teams of inexperienced developers. But the two key takeaways of
XP, pair programming and test-driven development, are known to work for small and
large systems alike.
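As a sketch, here is what one red-green cycle of test-driven development might look like in Python. The `slugify()` function and its test are illustrative inventions, not from the course:

```python
# One test-driven development cycle for a hypothetical slugify() function,
# using plain assert statements as the "tests."

# Step 1: write the test first. With no slugify() defined yet, running the
# test fails (in Python, with a NameError rather than a compile error).

# Step 2: write just enough code for the test to run. If the test ran at
# this point, it would fail, which confirms the test actually checks something.
def slugify(title):
    return ""  # deliberately incomplete

# Step 3: complete the function so the test passes.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# The test that drove the implementation:
assert slugify("Scrum and Kanban") == "scrum-and-kanban"
```

In a real project, the assertion would live in a test file run by a test framework; the point here is only the failing-test-first rhythm.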

Spotify engineering model


- [Instructor] The Spotify model refers to a great case study in culture rather than a
framework. Spotify's agile approach is unique in the sense that it does not follow any
single agile framework or methodology. In fact, Spotify does not even claim to have any
methodology. They define their approach as a culture of principles and values. Spotify
decided to not rely completely on one framework, but to apply a combination of different
frameworks, methodologies, and practices to find their own way. Spotify loosely defines
their approach towards agility as the Spotify engineering culture. Spotify was a scrum
company around 2008. They ran into scaling issues with scrum and found that many
scrum practices were counterproductive. Instead of choosing a more formal scrum
scaling framework, they decided to define their own simple approach. They decided to
go back to the agile basics. For Spotify, agility is more important than any specific
framework or methodology. Spotify teams are organized into autonomous and small
teams called squads. Spotify does small, frequent releases with maximum automation. In
their own words, release should be routine, not drama. Their release approach uses what
is called release trains and feature toggles. A release train is a metaphor for a chain of
frequent releases, where each release includes a set of features. If a release does not
make it to a train, there is another one going out soon. And unfinished features are part
of release trains, but they are hidden from users by a technique called feature
toggle. This reduces the need for code branching and merging, something that creates
technical debt and is difficult to manage. Spotify has a culture of continuous
improvement. They use various techniques to optimize their output. Here is an example
of a technique called a Kata board, that is used for improvement. This is the Kata board
that was made popular by Toyota. A Kata board contains four quadrants. The top left
quadrant defines a problem the team is trying to solve. The bottom left quadrant
defines the perfect world as defined by the team. That world may be unrealistic, but it is
a direction not a destination. The top right quadrant defines a realistic
target. Something which is one step ahead of where they are or one step closer to their
ideal direction. And the bottom right quadrant defines the next three actions that will
move them closer to the next realistic target. Spotify has a culture of innovation. There
are hackathons that bring people with similar interests together, and they transform
their ideas into working features. Teams learn from each other. So if a tool or
experiment works well for a team, other teams learn from that team's experience.
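The feature-toggle technique mentioned above can be sketched in a few lines. The flag names and the page-rendering function are hypothetical:

```python
# A minimal sketch of a feature toggle: unfinished code ships on the
# release train but stays hidden until its flag is flipped.
FEATURE_FLAGS = {
    "new_home_page": False,  # unfinished: shipped but hidden from users
    "dark_mode": True,       # finished: visible to users
}

def is_enabled(flag):
    return FEATURE_FLAGS.get(flag, False)

def render_home_page():
    if is_enabled("new_home_page"):
        return "new home page"
    return "classic home page"

print(render_home_page())  # "classic home page" until the flag is flipped
```

Because the unfinished feature lives behind a flag on the main code line, teams avoid long-lived branches and the merge debt they create.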
Spotify engineering culture
- The Spotify Agile model has led to a unique personnel structure. Spotify teams are
very autonomous and have the freedom of selecting what works for them. Most teams
use a combination of Scrum and Kanban, but their core values are based on Lean
principles of avoiding waste and providing business value to customers quickly and
often. Teams choose how they execute their work and the tools they use. There are few high-
level controls in place, but for the most part, teams work independently. The smallest
team in this model is a squad. A squad usually comprises one to seven people. In our
example, there are four people in a squad. A squad is similar to a Scrum team in the
sense it is self-organizing and cross-functional. Squads have a lot of freedom in
deciding what to build and how to build. Squad team members are co-located and work
in a highly collaborative environment. A squad alone cannot build all features on their
own, so they are organized into tribes that represent a group of squads that work on a
related area of work. A tribe is a collection of squads working on one such area, such as
search or managing playlists. A chapter represents a group of team members with
specific area of expertise, such as user interface designer or database
administrator. Chapter leads are similar to line managers. This type of organization
allows someone with a specific expertise, like a database administrator, to switch
between squads without a change in line manager. Another team organization construct
is a guild, which is a loosely defined group of people with common interests. This type
of organization cuts across squads and tribes and brings people with any work-related
interest, or even a common hobby, together. Guilds have email lists and events to
interact and benefit from each other. The Spotify model is aimed at organizing people
together, so they benefit from each other and work together towards the shared vision
of the product, not create organizational hierarchies. Teams get help from many Agile
coaches that are servant leaders focused on making the team more productive and
helping teams remove obstacles in their work. Spotify has a failure-friendly culture. This is
important because creative teams have to experiment, and many experiments are
bound to fail. If people think that they will be punished for failing, they will not
experiment, and that will be counterproductive as far as their creativity is concerned. At
Spotify, team members are encouraged to experiment and learn from mistakes. One
advantage of being able to recover from failures is that changes have limited impact
and are reversible. Spotify has internal code repositories organized like open source
repositories. Different teams have visibility to the code produced by other teams and
can make changes to the other team's code if necessary. Spotify culture minimizes
handoffs between teams. Teams help each other, but have minimal handoffs. The
culture of self-service is practiced where teams enable other teams to help
themselves, instead of being dependent on or waiting for other teams. Bureaucracy is
minimized at all levels. Chaos is not desirable, but it is still preferred over
bureaucracy. The Spotify engineering culture is an excellent case study of an enterprise
that took all the fluff out of different Agile approaches and focused on Agile
principles. Their approach is people-focused and applies a subset of different Agile
approaches to their advantage. Their model is so simple, yet so effective. And Spotify is
very humble about their approach because they add that their approach may or may
not work for you, and you may need to modify the Spotify model for your organization's
unique needs. If you are curious about the Spotify model, I recommend this blog post.

DevOps: Background
- [Instructor] Building software involves close cooperation between two types of
teams. The first team, comprising developers and testers, is the team that writes and
validates software. This is the dev of DevOps. But good software is useful only if it can
be deployed and released to your customers. The release part of software involves
knowledge of servers, middleware, network, storage configurations, and monitoring
techniques that most developers are not comfortable with. This is the ops part of
DevOps. Building efficient software delivery pipeline requires cooperation between dev
and ops. An agile organization is expected to have a fast and efficient software delivery
pipeline where you should be able to release working features to your customers as
quickly as possible. You need a culture of close cooperation between dev and ops to
make your software delivery pipeline efficient. This is what DevOps attempts to
achieve. Before we try to understand what DevOps is, let's try to establish the
context and understand the problem that DevOps helps to solve. Business stakeholders
have domain, market, and competitor knowledge, and come up with ideas and
features that they would like implemented. Their focus is the competitiveness of the
enterprise. Developers implement those features. QA people or quality assurance
people, test those changes, and report any issues that are found. Developers and QA
teams try their best to push stable product changes. Once the changes in the product
are deemed good enough to be deployed, the IT operations teams deploy those
changes to production or a production-like environment. The focus of operations
people is service-level agreements and the stability of those environments. One common
problem with deployments is that despite the best efforts to produce stable
changes, updates deployed to pre-production or production break things quite
often. This is because of the difference in environments caused by differences in
configurations, dependencies, data, and other aspects of server, storage, and network
settings. Also, the traditional deployment approaches have been very manual in
nature, and manual efforts are prone to errors. DevOps improves the code deployment
process. Agile implementations somewhat unify and align business, development, and
QA people into what is known as a cross-functional team. In their quest for enhanced
agility, agile teams started producing integrated product improvements more
frequently. The agile teams infused good agile practices such as test-driven
development and refactoring of code to produce stable releases. The code quality was
improved, but this would often break things in deployment. Agile approaches did not
explicitly leave out IT operations, but the way many agile teams worked created more
problems for the IT people. These agile implementations clearly missed the importance of IT
operations to the agility of the enterprise. The IT operations people saw frequent
product improvements as a bigger nightmare because now the changes that could
break infrastructure were being pushed more frequently. This is what DevOps attempts
to solve. DevOps is the culture and practices of close cooperation between Dev and
Ops.

DevOps: Concepts
- [Instructor] The cooperation of developers, testers, and IT operations, also known as
Dev and Ops, is not created in a vacuum. It's created by a change in mindset, which
includes collective responsibility of the deployment pipeline by all those involved in the
process. It includes learning and implementing integration practices and
patterns. DevOps represents a culture of cooperation. DevOps also includes the use of
tools to automate various processes, ranging from Build automation to automated
acceptance testing to programmatically provisioning and configuring infrastructure and
automated monitoring of deployed applications. More importantly, DevOps enables
organizations to deploy fixes and new features to users quickly and keeps deployments
more stable. This reduces the lead time on the work items, which is perfectly aligned
with the Lean principles. Let's review the three key practices of DevOps. Continuous
integration, or CI, is the practice where developers frequently commit changes to a
centralized repository to trigger an automated build. The build process does multiple
things. It validates the code base for various things such as compilation errors, code
quality metrics, automated tests, static code analysis, missing dependencies, etc. The
build succeeds or fails depending on the results of the validation. Developers get quick
feedback on any issues and fix those issues on a priority basis. This practice keeps the
code base in a stable state. Continuous delivery is an extension of continuous
integration and is the capability to always keep a product in a stable state after every
change, so the product is potentially deployable. Continuous deployment takes things a
little further and means automatically deploying the product increment to production or
production-like environment. Continuous deployment may or may not follow
continuous delivery. Continuous integration, continuous delivery, and continuous
deployment together form the three key aspects of DevOps. DevOps relies on building a
culture of cooperation between developers and operations staff. Both Dev and Ops
people are responsible for the entire software delivery pipeline. Continuous integration,
delivery, and deployment help in the process of making DevOps successful. Successful
DevOps implementation relies on automation and tools. It is a good idea for
developers to become familiar with these tools. However, it is important to note that
tools by themselves do not make the software delivery pipeline efficient. Successful
DevOps means close cooperation between Dev and Ops staff. DevOps is based on Lean
principles, where you build a fast delivery pipeline with minimal wastage. A working
software feature should be available to your customers very quickly or come back to the
developer very quickly if it is not ready for release. DevOps is not a replacement for
agile. In fact, DevOps enables end-to-end agility with faster and more stable deployments and
so complements traditional agile approaches. An alternate way of looking at DevOps is
that it is an agile approach that merges development and operations. DevOps is
definitely not just an industry buzzword; many organizations are reaping the benefits of
DevOps practices. DevOps is not a tool or technology. It involves a cultural shift towards
building an efficient pipeline that produces stable and faster delivery and healthy
operational practices. DevOps does not mean developers have the complete freedom to
push a button to deploy to production. You need proper checks and balances in
place to make sure your production deployments are in a stable state.
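To make the continuous integration idea concrete, here is a minimal sketch in Python of the kind of validation gate a CI server runs on every change. The check names are hypothetical stand-ins; a real pipeline would shell out to an actual test runner, static analyzer, and dependency auditor, and the build fails if any check fails.

```python
# Minimal sketch of a CI validation gate: every change must pass all
# checks before the build is declared stable. The checks here are
# stand-ins for real steps (unit tests, static analysis, dependency audit).

def run_pipeline(checks):
    """Run each named check; report failures and return the overall verdict."""
    failures = [name for name, check in checks if not check()]
    for name in failures:
        print(f"FAILED: {name}")  # quick feedback so developers can fix issues fast
    return len(failures) == 0  # the build succeeds only if everything passed

# Hypothetical stand-in checks; each would normally invoke a real tool.
checks = [
    ("unit tests", lambda: True),
    ("static analysis", lambda: True),
    ("dependency audit", lambda: True),
]

if run_pipeline(checks):
    print("build is stable: ready for continuous delivery")
```

The key design point is the fail-fast verdict: any single failing check marks the whole build unstable, which is what keeps the code base in a consistently deployable state.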
CMMI overview
- [Instructor] CMMI, or Capability Maturity Model Integration is a process improvement
model applicable to a wide range of industries. It was developed by a collaboration of
industry experts at Carnegie Mellon University's Software Engineering Institute. Its latest
version is CMMI version 2.0, which was released in early 2018. Please note that CMMI is
a model not a standard. It is not prescriptive. In other words, it does not tell you what
should be done or who should do it but it provides a model and guidance on achieving
process maturity at your organization. For example, CMMI defines a level of process
maturity in terms of whether you have documented processes. It does not tell you what
your processes should be, so that is a key concept to know about CMMI. Organizations
undergo appraisals to be awarded CMMI maturity levels one through five. Let's discuss
what the CMMI maturity levels are. The CMMI website offers very detailed resources on
their processes. Here we see the five maturity levels. At maturity level one or initial
maturity level, there are no documented processes or the process documentation is not
known to most people. You may be producing good quality goods and services but the
processes are ad hoc and reactive. It's like cooking a dish by instinct, without following a
recipe book. If the dish you cook is not good, you try to adjust the taste by adding more
condiments. At this level, there is a lot of variation in the output. At maturity level two or
managed maturity level, the processes are documented at project level and are mostly
reactive. Organizations at this level have processes that are planned, performed,
measured, and controlled. And projects adhere to well-defined plans. This is like
working for a restaurant chain, where each location has its own recipe book and the
recipe book is followed very well. At maturity level three or defined maturity level, the
processes are defined at organizational level and followed with customization for
projects if needed. Processes are consistent and proactive. This is similar to a
restaurant, where recipes, packaging, and delivery processes are documented,
followed, and monitored very well. At maturity level four or quantitatively managed
maturity level, process quality and performance objectives are set quantitatively and
process quality and performance can be measured through statistical methods. So you
can measure how well your processes are performing. Due to the use of scientific
methods, your processes become predictable. At maturity level five or optimizing
maturity level, your organization is focused on continuous process improvement. How is CMMI applicable to software development and agile approaches? CMMI version 2.0 holds a lot of promise in this area. It includes guidance on how agile methods can be used to optimize an organization's processes. Many agile approaches are
simple and very effective but have scaling problems. A process model that provides
organization wide agile scaling guidance may be very useful at an organizational
level. CMMI 2.0 has practices to move iteratively and incrementally towards optimal
performance, something which is perfectly aligned with the agile approaches. So I
recommend reviewing CMMI 2.0 for process improvement and organizational agility.
Six Sigma overview
- [Instructor] Lean and Six Sigma are both focused on minimizing waste and maximizing
customer value. Lean, when combined with Six Sigma, can improve customer
satisfaction and improve the quality of the goods and services that you produce. Let's
do a quick review of Six Sigma. Six Sigma originated in manufacturing and uses a
scientific and statistical approach to measure the quality of a process by tracking the
number of defects. The goal of Six Sigma is to reduce defects and keep the output of a
process within certain limits of a baseline. Six Sigma was developed at Motorola in the
1980s and popularized by Jack Welch at General Electric in the 90s. Six Sigma was designed to improve product quality, and its effectiveness led to widespread adoption by companies worldwide. Statistical methods calculate the Six Sigma numbers and percentages. The sigma in Six Sigma indicates variation from a desired reference point. A higher sigma level is better because it means fewer defects: a process at five sigma produces fewer defects than a process at four sigma.
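As a rough illustration of the statistics involved, a process's sigma level is commonly derived from its defects per million opportunities (DPMO), conventionally including a 1.5-sigma shift to account for long-term process drift. The sketch below uses that widely cited convention with Python's standard library; the exact numbers are illustrative:

```python
from statistics import NormalDist

def sigma_level(dpmo):
    """Approximate sigma level from defects per million opportunities,
    using the conventional 1.5-sigma long-term shift."""
    yield_fraction = 1 - dpmo / 1_000_000  # fraction of defect-free output
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# Six sigma corresponds to about 3.4 defects per million opportunities.
print(round(sigma_level(3.4), 1))   # → 6.0
print(round(sigma_level(6210), 1))  # → 4.0 (a four-sigma process)
```

This makes the "higher is better" point quantitative: moving from four sigma to six sigma means going from thousands of defects per million opportunities to just a handful.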
Six Sigma and software
- [Instructor] How is Six Sigma applicable to software development? Manufacturing is
pure science, where you can produce the same type of product repeatedly and quantifying defects is very manageable. Software development is partly science and partly art, and measuring quality can be somewhat challenging. You're
not producing tangible products repeatedly, and to make things more
complex, customer expectations, team skill level, organizational culture, politics,
technology, et cetera, are different for different projects and products. So it becomes
extremely challenging to measure the Sigma level of a software development
process. Six Sigma generally works at a macro level, so when you are talking about Six
Sigma, you're talking about high-level processes, not how you build one software
product. The good news is that Six Sigma includes a process improvement cycle called
the DMAIC cycle, which may be useful for all software and non-software initiatives. Let's
review this cycle next. DMAIC is an acronym where each letter stands for the name of a
phase in this data-driven, five-phase process. The phases are: Define, where you define a problem or an opportunity for improvement; value stream mapping, which we already covered in this course, is one technique for identifying problems. Measure, where you define how you will measure the performance of the activities in your process, with the help of techniques such as Pareto charts. Analyze, where you find the causes of variation or defects; root cause analysis with a fishbone diagram is one technique for finding the sources of variation. Improve, where you apply techniques for reducing the effect of, or eliminating the reasons for, defects and variation. The affinity diagram is one such technique: team members write down suggestions for process improvement, group similar suggestions together, and then the team chalks out an action plan for implementing those ideas. And Control: an improved process needs monitoring for two reasons. First, the process has to stay at its optimum level; second, there is always scope for improvement. You apply techniques such as control charts to measure output and keep it within allowed limits of variation. The details of these techniques are out of scope for this course; there are a number of Lean Six Sigma courses in our library, and I recommend Steve Brown's Lean Six Sigma Foundations course as a good starting point. You can apply the DMAIC cycle to software development process improvement. Perhaps your software delivery process needs improvement; you can apply the DMAIC cycle to it to find opportunities for more automation and faster delivery cycles. Quantifying the Six Sigma level of a software delivery process is
challenging. However, I have used a comparison technique to compare the number of defects in two similar software products. The technique was not perfect, because of the nature of the software development business, but it did bring out the fact that the number of defects in one product was about fifty percent higher than in the other, because the product with more defects was over-engineered. That was a clear, tangible benefit of applying Six Sigma. We applied a small subset of DMAIC cycle
techniques to improve the quality of one of the two software products. Lean and Six Sigma together help you build highly optimized, productive processes. Six Sigma may not be applicable in all cases, but the DMAIC cycle and its associated techniques are surely something you want to know. The learning process itself is fascinating, in my opinion, and it will give you opportunities to achieve process improvement as and when applicable.
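The control-chart idea from the Control phase above can be sketched numerically: compute the process mean and standard deviation from a baseline sample, set control limits at three standard deviations on either side, and flag any new observation outside those limits. This is a minimal illustration with made-up measurements, not a full control-chart implementation:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Return (lower, upper) three-sigma control limits from a baseline sample."""
    center = mean(baseline)
    spread = stdev(baseline)
    return center - 3 * spread, center + 3 * spread

def out_of_control(observations, limits):
    """Flag observations that fall outside the allowed limits of variation."""
    lower, upper = limits
    return [x for x in observations if not lower <= x <= upper]

# Hypothetical baseline, e.g. build times in minutes from a stable period.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0]
limits = control_limits(baseline)

# New observations: the 14.5 reading signals the process has drifted.
print(out_of_control([10.2, 9.9, 14.5], limits))  # → [14.5]
```

In practice you would recompute the limits periodically as the process improves, which is exactly the monitoring loop the Control phase describes.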
