Spiral model
- [Narrator] The Spiral development model was presented by Barry Boehm in his
research paper in 1986. It was one of the oldest software development models that
proposed an iterative development approach to building software. The development
approach was a mix of Waterfall and Iterative development models. The theme of this
approach was that building software iteratively leads to identification and easier
management of risks. If we were to visualize this model on a straight line, it is essentially
a series of Waterfalls. Each Waterfall contains four phases: Planning, Risk Analysis,
Engineering, and Evaluation. When you start
an iteration, you build on top of the output of the previous iteration. One key thing to
note here is that each iteration is different from the previous iteration because, as you
build a system, you get better understanding of the requirements and continue to
mitigate risks. And last, but not the least, risk management is an integrated part of Spiral
Model. Let's review Spiral Model in a little more detail. The first phase is called
Planning. This phase includes requirements identification and analysis, identification of
stakeholders, and lifecycle objectives of the system being built. Also, you identify win
conditions, that would define what is considered success for your project. The second
phase is called Risk Analysis, and includes risk-related activities, such as risk
identification, risk prioritization and mitigation. This is the phase when you
build prototypes to mitigate risks. You may undertake activities such as
identifying alternate solutions, so you can reduce or avoid risks. In the early iterations,
you may have a simple prototype, but in the later iterations, you may build a complete
prototype, or a release candidate. The third phase is the Engineering phase, where you
perform software implementation activities, such as detail design, coding, unit and
acceptance testing and deployment. In the early iterations, you may have something as
simple as a design model, but in the later iterations, you may end up coding and
deploying a complete solution. The fourth phase is Evaluation, where you get your
stakeholder review and feedback, and plan the next iteration. So in the first Waterfall,
represented by this spiral, you may produce just a prototype for early feedback. The
second Waterfall is built on top of the first Waterfall, and may produce a release
candidate. The third Waterfall could produce a launch candidate. You can visualize Spiral
Model life cycle with the help of a graph. The four quadrants of the graph represent four
phases of the Spiral Model, starting with the top left quadrant and going clockwise. You
start with the first iteration and transition through the four phases iteratively. The top
right quadrant is focused on risk management, and most of the prototyping work is
done here. The bottom right quadrant is where you perform software engineering
activities. The bottom left quadrant ends with the customer's evaluation of the
product. The X-axis here represents cumulative progress through the reviews. I would like to repeat here
that each iteration builds upon the output of the previous iteration. The Y-axis in this
graph represents cumulative cost of the product being built. As you can see, the size of
the spiral grows with time, and the overall shape of this graph led to this model being
named as Spiral Model. The Spiral software development model was a pioneering
approach at that time in recognizing that software development is inherently an iterative
process, something that modern Agile processes identified and advocated a few years
later, so this model was ahead of its time. One of the four phases in the model is Risk
Analysis. Prototyping is extensively used in the Spiral model, and staying focused
on risks reduces the chances of project failures.
Rational Unified Process: Overview
- [Instructor] Rational Unified Process, or RUP, was an attempt to come up with a
comprehensive iterative software development process. RUP is essentially a large pool
of knowledge. RUP consists of artifacts, processes, templates, phases, and disciplines. It
has detailed documentation, guidelines, sample artifacts, and deliverables. RUP is
defined to be a customizable process that would work for building small, medium, and
large software systems. Since the process is customizable, you can choose what you
want, and also customize your process with what are known as plugins. RUP has a
fascinating history. This is not a course on RUP, so don't worry if you're not familiar with
these terms. We will just skim the surface of RUP so we can learn something from
it. Before we go ahead and dissect RUP, let's review its history. Back in the early to mid
1990s, a company called Rational Software developed the Rational Unified Process as a
software process product. This was the era of object-oriented programming, and a set
of standard notations to represent an object-oriented system called UML, or Unified
Modeling Language, was becoming very popular. RUP was greatly influenced by object-
oriented analysis and design, and UML. The early effort to define RUP was led by
Philippe Kruchten, a Rational Software technical staff member, and his team. This effort
was combined with and influenced by other approaches from evangelists and subject
matter experts such as Grady Booch, famous for what is known as the Booch
method, Jim Rumbaugh's Object Modeling Technique, et cetera. In February 2003, IBM
acquired Rational Software. A few years later in 2006, IBM created a subset of
RUP, which is more agile centric and is called OpenUP. RUP has four phases: Inception,
elaboration, construction, and transition. These phases should not be confused with
requirements, analysis and design, development, and testing phases of the waterfall
model. Instead, view these phases as containers for small waterfall-like iterations. In other
words, each phase has one or more iterations. Each RUP phase ends with a milestone. At
the end of the inception phase, you will have achieved what is known as the Lifecycle
Objectives Milestone. So you will have all stakeholders agree to what you are going to
build. The elaboration phase ends with baselining the architecture of your software
system, and this landmark is called Lifecycle Architecture Milestone. Construction ends
with achieving Initial Operational Capability Milestone, which is where your software
product is ready to be used by end users. Transition is the phase where you fine tune
your application to make it fully usable at production scale, and this is where you
achieve Production Release Milestone. Each phase has one or more iterations, or mini-
waterfalls. Activities that are logically similar are grouped into RUP disciplines. The
original RUP had six disciplines, called Business Modeling, Requirements, Analysis and
Design, Implementation, Test, and Deployment. They added three more later,
called Configuration and Change Management, Project Management, and Environment.
Rational Unified Process: Life cycle
- [Instructor] This is an example of Rational Unified Process life cycle for software
development. In our example, there is just one iteration in the inception phase which
results in the achievement of the life cycle objectives milestone. Two iterations in the
elaboration phase result in baselining the system architecture, so the life cycle
architecture milestone is achieved at the end of this phase. Three iterations in the
construction phase make the software product ready for end users, and so the initial
operational capability milestone is achieved. Finally, two iterations in the transition
phase make the software optimized for production, and the production release
milestone is achieved at the end of this phase. Please note that the number of
iterations and the duration of iterations in a RUP cycle may vary from project to
project. Let's review a similar RUP life cycle but this time from the perspective of what
happens inside each iteration. Note that the inception iteration is heavy in
business modeling and requirements. Construction iterations are heavy on analysis and
design, implementation, testing, and deployment. Even within a phase, the iterations
may differ. For example, the first iteration in the
construction phase is heavier in design as compared to the third iteration. If RUP was so
good, why is no one talking about RUP today? It's because RUP was a heavy
process with a lot of documentation. Many people disliked it because they thought they
had to follow RUP religiously and implement all processes and artifacts. RUP was
designed to be customizable, but it was a little too prescriptive and heavy which led to
its downfall. RUP had templates for everything, and each template was pretty
comprehensive. Unfortunately, it needed a significant amount of work to remove or
customize sections in such large templates. Working with RUP was like going to a buffet
with 500 items where it was a struggle to pick the 10 items you want to try. What is
there to learn from RUP? I think many modern software engineering practices taught by
RUP were pathbreaking and are still relevant. Let me mention the six RUP best
practices. The first one is develop iteratively. All modern agile processes and
frameworks recognize that software development is essentially an iterative and
incremental process. The second and third practices of RUP are manage requirements
and manage change. Agile organizations allow and encourage changes in
requirements and also make extra efforts to manage requirements and manage changes
of any kind. The fourth best practice of continuously verifying quality is the cornerstone
of continuous integration and deployment practices of DevOps. A picture speaks a
thousand words. Visual modeling of software with Rational Rose and other modeling
tools was pioneered by RUP. The sixth best practice of using component-based
architecture facilitates creation of software based on smaller and more manageable
components. Many customizable versions of RUP such as the ICONIX process and
Enterprise RUP were successfully applied to many small, medium, and large software
development projects.
Chapter 2
Dynamic systems development method (DSDM)
- [Narrator] DSDM stands for Dynamic Systems Development Method. It was developed
in 1994. This was the era where organizations were slowly going away from the waterfall
approach and construction-industry-like formal processes. When DSDM was
introduced, a new approach called RAD or Rapid Application Development was
becoming the norm. It was fairly easy to build mocked up screens, get quick customer
feedback and build system features very quickly. While this approach was very agile, it
was also a growing concern among organizations that Rapid Application
Development needed some structure or formal processes to maintain good software
quality. So a group of organizations built a framework for project delivery and project
management called DSDM. This framework had some fairly tight set of rules and was
designed to be compatible with ISO 9000 and PRINCE2, a project management
framework, very popular, mostly in Europe. The DSDM Consortium was formed in
1994 by a group of organizations. DSDM continued to be the project
development standard in most of Europe for the next several years. Please note that
they viewed software system building initiatives as projects, rather than products. In
2014, the DSDM Handbook was made available online, to the public. In October 2016,
the DSDM Consortium changed its name to Agile Business Consortium and they own
the DSDM Framework. You can view and download DSDM templates from the resources
section of the Agile Business Consortium website. At a very high level, the DSDM life
cycle consists of three phases - the pre-project phase is executed at portfolio and
executive management level, where projects are identified and funding commitments
are made. The second phase is called the project life cycle phase, consisting of several
sub-phases. In the first two sub-phases, additional feasibility analysis is done for the
projects identified in the pre-project phase, and this is followed by sub-phases where the
system is built in an iterative and incremental manner. The last phase is called post-project, which is kind
of a post-mortem phase. This is when you determine if the expected benefits of the
project have been realized. Also, you execute activities around looking at ways of
improvement based on studying and discussing what went well and what did not
during the execution of the project. The key thing to note here is that the DSDM life
cycle is not a waterfall-like life cycle. Different phases can be repeated and you can
iterate between phases. The Dynamic Systems Development Method mindset can be distilled
to eight key principles. You can find a description of all eight principles on the Agile
Business Consortium website. For example, let's take a look at the eighth principle called
demonstrate control. The goal here is to have a highly visible plan and manage it
carefully and diligently so you do not deviate from your plan and keep things under
control. DSDM advocates the use of several proven practices. Many of them are very
applicable today. Firstly, it talks about timeboxing your activities. This means each
activity is assigned a maximum time limit, which cannot be extended. When the time
limit expires, you stop working on that activity. Timeboxing forces teams to stay
focused and avoid unnecessary refinements. All teams and organizations have
budgetary time and resource constraints, so requirements must be prioritized. DSDM
uses MoSCoW prioritization on requirements which categorize requirements into must
have, should have, could have and won't have. And last, but not the least, DSDM
advocates an iterative and incremental approach which is still the most acceptable way
of building software.
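The MoSCoW prioritization described above can be sketched as a simple bucketing function. This is a minimal illustration, not part of DSDM itself; the requirement names and the function are invented for the example.

```python
# Sketch of MoSCoW prioritization: sort requirements into DSDM's
# Must/Should/Could/Won't-have buckets. Requirements here are invented.
from collections import defaultdict

MOSCOW = ("must", "should", "could", "wont")

def prioritize(requirements):
    """Group (name, category) pairs into MoSCoW buckets."""
    buckets = defaultdict(list)
    for name, category in requirements:
        if category not in MOSCOW:
            raise ValueError(f"unknown MoSCoW category: {category}")
        buckets[category].append(name)
    return buckets

reqs = [
    ("process card payments", "must"),
    ("email receipts", "should"),
    ("dark mode", "could"),
    ("loyalty points", "wont"),
]
buckets = prioritize(reqs)
print(buckets["must"])  # ['process card payments']
```

With time and budget fixed, the team commits only to the "must" bucket and treats the rest as negotiable scope.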
Feature-driven development (FDD)
- [Instructor] Feature-Driven Development or FDD is a lightweight and agile process. In
the world of FDD, software is viewed as a collection of working features. A feature is just
a piece of working functionality that has business value. Each FDD team tries to deliver
working software, which is composed of working features. At a high level, FDD is an
iterative process with five steps that will be explored later in this lesson. Let's review
examples of a feature. In a nutshell, a feature is a small piece of working
functionality, typically expressed as action, result, object. Take the example
"calculate monthly interest on account balance." Calculate is a verb that
represents action. Monthly interest is the result. And account balance is the object on
which an action is performed. Let's review the FDD life cycle, which consists of the
following five steps. Develop overall model: in this phase, a high-level initial domain
model is built based on the team's understanding of the problem domain. This is like a
class diagram, which depicts most of the business concepts in your problem
domain and how they are related to each other. Build feature list: in this phase, the
domain is divided into subject areas and each subject area contains business activities or
work flows. Steps in these activities are identified as features. A guideline in FDD is to
keep features very fine grained so they can be implemented in just a short iteration of
two weeks or less. In the plan by feature phase, features are prioritized and a feature
implementation plan is developed. This includes, feature assignments to developers. In
the design by feature phase, more details are added to the classes in the domain
model. Sequence diagrams are developed to flesh out the implementation details. The
goal here is to have a design model that can be used to implement the
features. Thorough inspection of design is done to make sure the design will be able to
meet requirements. Build by feature: this is when the classes that implement features are
implemented, tested, inspected, and deployed. The last two steps, design by feature and
build by feature are executed in parallel by different feature owners for different sets of
features. FDD was developed in the era of unified modeling language. A set of notations
to represent an object oriented system. So a lot of UML and object oriented design is
used in FDD. Tracking status updates in any software development is challenging,
subjective, and prone to human judgment. We looked at the five steps of FDD. The last
two steps included multiple milestones. Each milestone in FDD has a percentage
completion assigned to it. So if a feature is done up to a milestone, the completion
percentage is the sum of completion percentages of all milestones up to that
milestone. In our example, the percentage completion is the sum of percentage
completions of milestones 1.1, 1.2, 2.1, and 2.2, and it is 60%. FDD is focused on
features, something that provides business value to customers. Team organization in
FDD is based around features and those teams, also called feature teams have the skills
to implement end to end functionality. This is a great practice implemented by
FDD. FDD scales very well because teams can work in parallel on multiple features and
can integrate or reuse their work whenever necessary. In fact, the earliest FDD
implementations were used to build large banking systems successfully. The ability to
track completion status is another great characteristic of FDD that was discussed earlier.
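The milestone arithmetic described above can be sketched in a few lines. The six milestone names and percentage weights below are the commonly cited FDD defaults, not the milestone labels (1.1, 1.2, 2.1, 2.2) used in the transcript's own example, so treat the numbers as illustrative.

```python
# Sketch of FDD milestone-based tracking: a feature's completion
# percentage is the sum of the weights of all milestones reached so far.
# The weights below are the commonly cited FDD defaults (illustrative).
MILESTONES = [
    ("domain walkthrough", 1),
    ("design", 40),
    ("design inspection", 3),
    ("code", 45),
    ("code inspection", 10),
    ("promote to build", 1),
]

def percent_complete(last_reached):
    """Sum milestone weights up to and including last_reached."""
    total = 0
    for name, weight in MILESTONES:
        total += weight
        if name == last_reached:
            return total
    raise ValueError(f"unknown milestone: {last_reached}")

print(percent_complete("design inspection"))  # 44
```

Because each milestone has a fixed weight, status reporting becomes objective: the percentage is computed, not estimated.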
Scrum workflow
- [Narrator] Let's see how a Scrum Team builds a product incrementally and
iteratively. At the beginning of the Sprint, the Scrum Team comes together for an event
called Sprint Planning. In this event the Development Team works with the Product
Owner to select which items to work on. The Scrum Team also drafts a Sprint
Goal, which is just a few sentences that describe the high-level business goal of the
Sprint. Then the Development Team builds an initial plan on how they will implement
those Product Backlog items. The Scrum Master is available to facilitate, as and when
required, and the Product Owner is available to answer any questions. The Development
Team spends the time in the Sprint developing the features listed, testing to ensure that
they meet specifications, and compiling a workable software deliverable. The Sprint
concludes with a Sprint Review and a Sprint Retrospective. This is another look at the
overall view of how the Scrum life cycle works. We already covered Sprint Planning, let's
take a deeper look at several key Scrum events: Daily Scrum, Sprint Review and the
Sprint Retrospective. The Development Team continues to meet each day in an
event called the Daily Scrum, which is timeboxed to 15 minutes. The Scrum Master and
Product Owner can participate in this event, but are not required attendees. This is a
Development Team huddle or sync-up, not a status meeting. It is an opportunity for the
Development Team to measure if the team is on track to meet the Sprint Goal. Many
Scrum teams follow the three-question format, which means each Development Team
member answers three questions. One, "What did you do since the last Daily
Scrum?" Two, "What are you planning to do today?" Three, "Are there any
impediments to what you are trying to accomplish?" The three-question format is not mandatory, and the
Development Team can conduct this event in whatever way they want. The
Development Team produces a Product Increment at the end of each Sprint, which is
reviewed in an event called Sprint Review. This is when the Scrum Team and a group of
stakeholders get together to inspect the Product Increment. This is an informal event,
and the feedback here is used to plan future Sprints, not to seek formal
acceptance. This event is timeboxed to four hours for a 30 day Sprint and must be
adjusted for shorter Sprints. The last event of the Sprint is called Sprint
Retrospective, when the Scrum Team inspects every aspect of their work other than the
Product Increment itself. They discuss, and come up with an Action Plan on how they
can do better with their processes, team communication, tools, skills, etcetera. The team
commits to a subset of their Action Plan items. This event is timeboxed to three hours
for a 30 day Sprint and must be adjusted for shorter Sprints. If I were to pick two key
aspects of Scrum, I would highlight these two: One, it is a simple but comprehensive
framework to build products incrementally and deliver business value faster and more
often. Two, one key rule of Scrum is to produce a potentially shippable Product
Increment at the end of each Sprint.
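The transcript gives timeboxes for a 30-day Sprint (four hours for the Sprint Review, three for the Sprint Retrospective) and says they "must be adjusted for shorter Sprints." Proportional scaling, sketched below, is one common interpretation, not a Scrum rule; the scaling function is an assumption for illustration.

```python
# Sketch: scale 30-day-Sprint event timeboxes down proportionally for
# shorter Sprints. The hours come from the transcript; the proportional
# rule is an assumption (Scrum only says events are "usually shorter").
TIMEBOX_HOURS_FOR_30_DAY_SPRINT = {
    "sprint review": 4,
    "sprint retrospective": 3,
}

def scaled_timebox(event, sprint_days):
    """Return the event's timebox in hours for a Sprint of sprint_days."""
    return TIMEBOX_HOURS_FOR_30_DAY_SPRINT[event] * sprint_days / 30

print(scaled_timebox("sprint review", 30))            # 4.0
print(round(scaled_timebox("sprint review", 14), 2))  # 1.87
```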
Lean overview and key concepts
- [Instructor] Lean originated in the post-World War II era. This was the time when many
economies were devastated and people had poor buying power. There was a lot of
demand to rebuild factories and manufacture products with maximum efficiency. The
Toyota Production System, or TPS, was one such efficient and popular system. It was a
combination of management style, working culture and production environment that
was aimed at reducing waste and manufacturing at optimal efficiency. Toyota's
manufacturing process inspired Mary and Tom Poppendieck's 2003 book, "Lean
Software Development," where they taught how many of these manufacturing
philosophies and techniques could be applied to software development. Lean principles
can be applied to software and hardware projects. Lean software development is a
collection of principles centered on the key tenet of minimizing waste. This is achieved
through processes that visualize the production pipeline. Various techniques are used to
highlight bottlenecks and reveal efficiencies or inefficiencies within the system. A
common technique that can be used to identify waste is called value stream
mapping. Let's focus on it first. Think of a value stream map as a workflow or a sequence
of steps to produce a product or service with business value. For example, the sequence
of steps that starts with ordering a pizza to the point when the pizza is delivered, can be
called a value stream. A value stream contains steps that add value and, in a perfect
world, you'd like the value stream to contain only value-adding steps and no wasteful
steps. This is a slightly over-simplified example of a manufacturing value stream
map. Information flows from right to left in the upper part of the diagram, where a
customer submits an order to an office, which orders raw materials from a supplier. This
is the information flow part of VSM, or value stream map, and would be produced for an
aggregate of customer orders. In the middle is the materials flow that shows the
sequence of steps to produce the product. At the bottom is the time ladder, where the
lower staircases show actual processing time, whereas elevated staircases show lead
time, or the total time elapsed in a manufacturing step. As you can see, the total lead
time, or total elapsed time, of all steps combined in the materials flow is 130
minutes, whereas the real processing time is 65 minutes. So there is apparently 65
minutes of non-value-added time that could be reduced. So let's summarize what we
learned about value stream mapping. We use value stream mapping to identify areas of
improvement in a value stream. A typical manufacturing value stream map depicts all
steps at the macro level and includes three types of information, information flow,
material flow, and what is called a time ladder. The time ladder at the bottom of a value
stream map is critical because it identifies value-add time, as well as non-value-add
time. A value stream map, or VSM, contains information flows such as information
flowing from a customer to an office and a warehouse. You can also visualize flow of raw
material from a supplier to a factory, up to a shipping truck that transports the finished
product. A time ladder shows value-added time and total time consumed for producing
a product or service. We will see another example of VSM shortly.
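The time-ladder arithmetic above can be reproduced with a toy calculation: 130 minutes of total lead time against 65 minutes of actual processing. The step names and the processing/waiting split are invented for illustration; only the totals come from the example in this lesson.

```python
# Toy value-stream calculation: processing time is value-add, waiting
# time is waste, and lead time is their sum across all steps.
# Step names and splits are invented; totals match the lesson's example.
steps = [
    # (step name, processing minutes, waiting minutes)
    ("cut",      20, 25),
    ("assemble", 30, 20),
    ("package",  15, 20),
]

process_time = sum(p for _, p, _ in steps)      # value-add time
lead_time    = sum(p + w for _, p, w in steps)  # total elapsed time

print(process_time, lead_time)                      # 65 130
print(f"{process_time / lead_time:.0%} value-add")  # 50% value-add
```

The gap between the two totals, 65 minutes here, is the non-value-added time that a lean team would target first.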
Lean principles
- [Instructor] Lean software development is a collection of principles centered on
maximizing efficiencies and minimizing waste. Let's dig deeper into the core principles
of lean. Let's continue to review lean principles as applicable to software
development. Eliminate waste. Lean thinking teaches us to think from the perspective of
value addition for the customer. Any process or work that does not add value is
waste. Firstly, we need to understand what is value to our customer. Secondly, when we
produce software features, we need to produce that value. Nothing more, nothing
less. Adding more software features that the customer has not asked for is called gold
plating and is a waste. We need to understand implicit requirements but that is
something that should be learned with the help of continuous feedback from the
customer. Unnecessary processes or switching between tasks, also called context
switching, are other examples of waste. Amplify learning. Agile practices are iterative, with
delivery of business value, early and often. Software development is complex with the
potential for changing requirements and priorities. The recommended lean practice is to
constantly learn by getting regular and frequent feedback from stakeholders. Decide as
late as possible. This principle appears counterintuitive but is a good practice and
applicable to software development. Many methodologies were developed in the last
century that advocated detailed checklists that provided clear decisions and policies
defined early in the process. Making early decisions was fraught with risks because
everything changes in software development. Customer requirements change, market
conditions change, technologies evolve with time and change. Late decision is actually
an inappropriate term here. The appropriate term is last responsible moment. We need
to wait till we know all the facts and then make a decision at the last responsible
moment. For example, you can only define a database tuning approach after you know what
database engine will handle your customers data volumes and workloads. Deliver as fast
as possible. When you deliver working features to customers quickly and often, you are
bringing value to the customer. Longer iterations carry bigger risks: a large number of
unidentified issues, changes in market conditions, and technology changes. Smaller
iterations are easier to manage. Empower the team. Agility is all about bottom-up
intelligence. Empowering a team fosters creativity and motivates the team to do
more. People that design and build software are knowledge workers and should be
given enough autonomy and support to bring out their creativity and sense of
responsibility to build great solutions. Build integrity in. Lean teams focus on making
sure the customer has a good overall experience of the system. This is called perceived
integrity. That is how the system is delivering value and is usable and easy to
maintain. The other type of integrity is called conceptual integrity which works at a more
micro level. In this case, the system works like a well-oiled machine comprising
good, well-tested components. Lean teams build integrity right into their process of
building software system through refactoring, thorough testing, and consistent and
frequent communication with the customer. Building integrity should be an intrinsic
part of the entire process, not an afterthought. See the whole. Lean teams focus on
understanding the entire workflow and work on optimizing the entire process, rather
than improving a subset of the entire workflow. We are often tempted to improve just
one step in the process because each step improvement will improve the entire process
but in many cases, you cannot optimize the entire process without getting the big
picture and knowing the entire process. For example, you can expedite the software
defect identification process, but that alone does not optimize the entire development
effort, perhaps because the real bottleneck is a skills gap among the developers.
Kanban
- [Instructor] Kanban is a work process management methodology based on lean
principles. Kanban can be summarized in two key concepts. One, visualize your work,
and two, limit work in progress. Even though Kanban looks simple at first sight, it is a
very powerful approach based on advanced queuing theory. Let's see an example of how
a software development team applies Kanban. This is an example of a Kanban
board developed with the LeanKit tool. Each vertical lane in the board represents the status of
a work item. In our Kanban board, we have four work item statuses: planning and
coordination, design, develop, and accept. Work items such as software features, user
stories, and defects are pulled into the board and processed through the board from left
to right. All completed work items are eventually pulled to the rightmost lane. Kanban is
a visual system because Kanban board gives you a quick summarized view of the work
in progress. Each lane can be viewed as a work queue and has a work in progress on rip
limit. The rip limit indicates the maximum number of items that should be added to that
lane. Limiting count of items in a queue is required to work at optimal efficiency. Rip
limits can be justified in arithmetic, as well as non-arithmetic terms. The non-arithmetic
lay person's explanation is that the rip limit in each lane is dependent on the number of
people working on items in that queue of that lane and their skills and experience
level. If you work on too many items simultaneously, you will end up working at sub-
optimal efficiency. The arithmetic explanation of placing work items is based on queuing
utility, centered at a formula defined in the 1950s called Little's Law. Let's do a quick
review of this formula. Even though Little's Law was originally stated in a totally different
context, it can be stated in work management terms as a simple arithmetic expression as
follows. The number of items in a work queue is equal to the average time spent on each
work item multiplied by the arrival rate of work items. Or, equivalently, the average time
spent on each work item is equal to the number of items in the work queue divided by the
arrival rate of work items. So if you wish to work at higher efficiency, that is, spend less
time per work item, you can either reduce the number of items in the work queue, which
is exactly what the WIP limit caps, or you can reduce the rate at which work items arrive
into the queue.
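Little's Law as stated above can be checked with a few lines of arithmetic. The numbers are invented for illustration.

```python
# Little's Law in work-management terms: L = lambda * W, where
# L = items in the queue (WIP), lambda = arrival rate of work items,
# and W = average time each item spends in the system.
def avg_time_per_item(wip, arrival_rate):
    """W = L / lambda: average time an item spends in the system."""
    return wip / arrival_rate

# With 6 items in progress and 2 new items arriving per day,
# each item spends an average of 3 days in the system.
print(avg_time_per_item(wip=6, arrival_rate=2))  # 3.0

# Halving WIP at the same arrival rate halves the time per item.
print(avg_time_per_item(wip=3, arrival_rate=2))  # 1.5
```

This is the arithmetic case for WIP limits: with throughput held constant, less work in progress means faster flow per item.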
Kanban board
- [Instructor] Kanban is a visual process management approach. The Kanban Board is a
powerful and effective communication tool about the team's progress and
bottlenecks. Value Stream Mapping from lean principles is very easy to apply to
Kanban. This is because you can visualize your entire workflow or value stream via a
Kanban Board. In fact, Kanban is based on lean principles. Let's take a quick look at a
slightly improved version of Kanban Board. This Kanban Board just shows two
statuses, Planning/Coordination and Design. Each status or queue has a WIP limit. The first
queue has reached its WIP limit, as indicated by the highlighted number four. Splitting
each column into two parts, doing and done, shows whether the items in the queue are
stuck due to the next queue or there is an issue in the same queue. In our example,
both the queues have reached their WIP limit, but the first queue has all items done, and
the second queue cannot pull items from it. Also, the second queue has an item, View
Trainer Profiles, marked as blocked, which is shown by an X next to it. As you can see, the Kanban
Board communicates a lot of information about the process in a concise and a simple
way. In our case, the bottleneck is the design step and not the planning/coordination
step. One key feature of Kanban is it is a pull system. Items are pulled into a queue when
the queue has room for more items. There are no artificial time box boundaries like
sprint in Kanban. Work items are delivered when they are ready, and there is no need to
wait for an iteration time box to expire. So, the customer gets value delivered
continuously. Kanban is very lightweight, and besides the Kanban Board, the only other
practice used by most Kanban teams is daily standups.
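The pull behavior described above can be sketched in a few lines of Python; the column names, WIP limits, and item names below are illustrative, not from any specific tool:

```python
# Minimal sketch of a Kanban pull system with WIP limits.

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def has_room(self):
        return len(self.items) < self.wip_limit

    def pull_from(self, upstream):
        """Pull one item from the upstream column, if this column has room."""
        if self.has_room() and upstream.items:
            self.items.append(upstream.items.pop(0))
            return True
        return False  # blocked: this column is at its WIP limit

planning = Column("Planning/Coordination", wip_limit=4)
design = Column("Design", wip_limit=3)

planning.items = ["Sign-up page", "Search", "Checkout", "Profiles"]
design.items = ["Home page", "Nav menu", "Trainer list"]

# Design is full, so it cannot pull -- the bottleneck is visible immediately.
print(design.pull_from(planning))  # False
```

Because items are pulled only when a queue has room, a full downstream column automatically stops work from piling up behind it, which is exactly what the board makes visible.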
Extreme programming
- [Instructor] Extreme Programming, or XP, is a fine-grained implementation-centric
approach. It can be viewed as a collection of software engineering practices. XP was
developed by Kent Beck in the 1990s, and it has its own set of values, rules, principles, and
practices. XP recommends a customer-driven iterative approach with short weekly
iterations. It proposes a collaborative approach where customers provide
requirements, and the developers break the requirements into tasks and assign tasks to
themselves. There's also a quarterly iteration that is a container for the weekly
iterations and allows teams to take a more macro-level view of the work and spend time
at a higher level on planning. XP teams believe in just-in-time design. So they start at a
simple high-level design and continue to evolve the design. They also refactor code
continuously to improve code quality and address technical debt. As developers build
code, they integrate their code with the code of other developers very frequently. XP
teams run short builds of 10 minutes or less. XP has nearly a dozen core practices. More
information about these practices is available at the Agile Alliance website. In this
lesson, we will cover two XP practices that are used by many teams even today. They are
pair programming and test-driven development. Let's review these practices in a little
more detail. In pair programming, two developers sit at a computer terminal. One writes
code, while the other helps the developer who is writing code. The second person
constantly reviews the code as it is being developed, asks questions about the
implementation, and assists the other developer with coding suggestions. The two
developers switch roles periodically, and the process continues. This approach
introduces continuous inspection of code and, hence, leads to code quality
improvement. Test-driven development means you do not write any code unless you
have a failed test for it. It is a three-step process. The first step is to write a test for a
function that is yet to be written. The code will not compile. The second step is to write
the function so you have just enough code to make sure the code compiles. The test
should fail. If the test passes, then the test is inadequate to verify any functionality and
should be refactored. The third and the last step is to complete coding of the
function to meet the requirements of the test. After each test passes, you refactor the
code, then write another test and extend the code to pass it. This process
continues until you have exhausted your list of
tests, and your code is complete. Test-driven development sometimes exposes poorly-
designed monolithic code. If you are struggling to write unique tests for a piece of
code, it may need refactoring to break it into more manageable chunks. Developers take
pride in their work. They test their code very thoroughly, but testers still find defects
in it. Testers have no bias and make no assumptions about the code. That is why
they find problems that the developers don't find. Test-driven development helps
developers in a similar way: it reduces their bias towards their own code. Extreme
Programming has been criticized for being over-simplistic and ineffective for building
large systems, or for teams of inexperienced developers. But the two key takeaways of
XP, pair programming and test-driven development, are known to work for small and
large systems alike.
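The three-step test-driven development cycle described earlier can be sketched as follows; the `slugify` function and its test are hypothetical examples, not from the course:

```python
# One test-driven development cycle, in miniature.

# Step 1 (red): write the test first, for a function that does not exist yet.
# Running the test now fails, because slugify is undefined.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (still red): write just enough code that everything loads, but the
# test still fails -- proving the test actually verifies something. E.g.:
# def slugify(text):
#     return text

# Step 3 (green): complete the function until the test passes.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify()  # passes silently; now refactor, then write the next test
print("test passed")
```

The deliberately failing step 2 is what keeps the tests honest: a test that passes against an empty implementation verifies nothing.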
DevOps: Background
- [Instructor] Building software involves close cooperation between two types of
teams. The first team, comprising developers and testers, is the team that writes and
validates software. This is the dev of DevOps. But good software is useful only if it can
be deployed and released to your customers. The release part of software involves
knowledge of servers, middleware, network, storage configurations, and monitoring
techniques that most developers are not comfortable with. This is the ops part of
DevOps. Building an efficient software delivery pipeline requires cooperation between dev
and ops. An agile organization is expected to have a fast and efficient software delivery
pipeline, where you should be able to release working features to your customers as
quickly as possible. You need a culture of close cooperation between dev and ops to
make your software delivery pipeline efficient. This is what DevOps attempts to
achieve. Before we try to understand what DevOps is, let's try to establish the
context and understand the problem that DevOps helps to solve. Business stakeholders
have domain, market, and competitor knowledge, and come up with ideas and
features that they would like implemented. Their focus is the competitiveness of the
enterprise. Developers implement those features. QA people, or quality assurance
people, test those changes and report any issues that are found. Developers and QA
teams try their best to push stable product changes. Once the changes in the product
are deemed good enough to be deployed, the IT operations teams deploy those
changes to production or a production-like environment. The focus of operations
people is service-level agreements and the stability of those environments. One common
problem with deployments is that despite the best efforts to produce stable
changes, updates deployed to pre-production or production break things quite
often. This is because of the difference in environments caused by differences in
configurations, dependencies, data, and other aspects of server, storage, and network
settings. Also, the traditional deployment approaches have been very manual in
nature, and manual efforts are prone to errors. DevOps improves the code deployment
process. Agile implementations somewhat unify and align business, development, and
QA people into what is known as a cross-functional team. In their quest for enhanced
agility, agile teams started producing integrated product improvements more
frequently. The agile teams infused good agile practices such as test-driven
development and refactoring of code to produce stable releases. The code quality was
improved, but this would often break things in deployment. Agile approaches did not
explicitly leave out IT operations, but the way many agile teams worked created more
problems for the IT people. These agile implementations clearly missed the importance of IT
operations to the agility of the enterprise. The IT operations people saw frequent
product improvements as a bigger nightmare because now the changes that could
break infrastructure were being pushed more frequently. This is what DevOps attempts
to solve. DevOps is the culture and practices of close cooperation between Dev and
Ops.
DevOps: Concepts
- [Instructor] The cooperation of developers, testers, and IT operations, also known as
Dev and Ops, is not created in a vacuum. It's created by a change in mindset, which
includes collective responsibility for the deployment pipeline by all those involved in the
process. It includes learning and implementing integration practices and
patterns. DevOps represents a culture of cooperation. DevOps also includes the use of
tools to automate various processes, ranging from Build automation to automated
acceptance testing to programmatically provisioning and configuring infrastructure and
automated monitoring of deployed applications. More importantly, DevOps enables
organizations to deploy fixes and new features to users quickly and keeps deployments
more stable. This reduces the lead time on the work items, which is perfectly aligned
with the Lean principles. Let's review the three key practices of DevOps. Continuous
integration, or CI, is the practice where developers frequently commit changes to a
centralized repository to trigger an automated build. The build process does multiple
things. It validates the code base for various things such as compilation errors, code
quality metrics, automated tests, static code analysis, missing dependencies, etc. The
build succeeds or fails depending on the results of the validation. Developers get quick
feedback on any issues and fix those issues on a priority basis. This practice keeps the
code base in a stable state. Continuous delivery is an extension of continuous
integration and is the capability to always keep a product in a stable state after every
change, so the product is potentially deployable. Continuous deployment takes things a
little further and means automatically deploying the product increment to production or
a production-like environment. Continuous deployment may or may not follow
continuous delivery. Continuous integration, continuous delivery, and continuous
deployment together form the three key aspects of DevOps. DevOps relies on building a
culture of cooperation between developers and operations staff. Both Dev and Ops
people are responsible for the entire software delivery pipeline. Continuous integration,
delivery, and deployment help in the process of making DevOps successful. Successful
DevOps implementation relies on automation and tools. It is a good idea for
developers to become familiar with these tools. However, it is important to note that
tools by themselves do not make the software delivery pipeline efficient. Successful
DevOps means close cooperation between Dev and Ops staff. DevOps is based on Lean
principles, where you build a fast delivery pipeline with minimal wastage. A working
software feature should be available to your customers very quickly or come back to the
developer very quickly if it is not ready for release. DevOps is not a replacement for
agile. In fact, DevOps enables end-to-end agility with faster and more stable deployments and
so complements traditional agile approaches. An alternate way of looking at DevOps is
that it is an agile approach that merges development and operations. DevOps is
definitely not an industry buzzword; many organizations are reaping the benefits of
DevOps practices. DevOps is not a tool or technology. It involves a cultural shift towards
building an efficient pipeline that produces stable and faster delivery and healthy
operational practices. DevOps does not mean developers have the complete freedom to
push a button to deploy to production. You need proper checks and balances in
place to make sure your production deployments are in a stable state.
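The build-gate idea behind continuous integration can be illustrated with a small sketch; the `run_build` helper and the check names are made up for illustration:

```python
# Illustrative CI gate: run a series of validation steps and report
# success only if every step passes, failing fast on the first problem.

def run_build(checks):
    for name, check in checks:
        if not check():
            print(f"BUILD FAILED at step: {name}")
            return False
    print("BUILD SUCCEEDED")
    return True

checks = [
    ("compile", lambda: True),          # e.g., code compiles cleanly
    ("unit tests", lambda: True),       # e.g., all automated tests pass
    ("static analysis", lambda: False), # e.g., a lint rule was violated
]

run_build(checks)  # stops at "static analysis" and fails the build
```

Because the gate fails fast, developers get quick feedback on exactly which validation broke, which is what keeps the code base in a stable state.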
CMMI overview
- [Instructor] CMMI, or Capability Maturity Model Integration, is a process improvement
model applicable to a wide range of industries. It was developed by a collaboration of
industry experts at Carnegie Mellon University's Software Engineering Institute. Its latest
version is CMMI version 2.0, which was released in early 2018. Please note that CMMI is
a model not a standard. It is not prescriptive. In other words, it does not tell you what
should be done or who should do it but it provides a model and guidance on achieving
process maturity at your organization. For example, CMMI defines a level of process
maturity in terms of whether you have documented processes. It does not tell you what
your processes should be, so that is a key concept to know about CMMI. Organizations
undergo appraisals to be awarded CMMI maturity levels one through five. Let's discuss
what the CMMI maturity levels are. The CMMI website offers very detailed resources on
their processes. Here we see the five maturity levels. At maturity level one or initial
maturity level, there are no documented processes or the process documentation is not
known to most people. You may be producing good quality goods and services but the
processes are ad hoc and reactive. It's like cooking a dish by instinct, without following a
recipe book. If the dish you cook is not good, you try to adjust the taste by adding more
condiments. At this level, there is a lot of variation in the output. At maturity level two or
managed maturity level, the processes are documented at project level and are mostly
reactive. Organizations at this level have processes that are planned, performed,
measured, and controlled. And projects adhere to well-defined plans. This is like
working for a restaurant chain, where each location has its own recipe book and the
recipe book is followed very well. At maturity level three or defined maturity level, the
processes are defined at organizational level and followed with customization for
projects if needed. Processes are consistent and proactive. This is similar to a
restaurant, where recipes, packaging, and delivery processes are documented,
followed, and monitored very well. At maturity level four or quantitatively managed
maturity level, process quality and performance objectives are set quantitatively and
process quality and performance can be measured through statistical methods. So you
can measure how well your processes are performing. Due to the use of scientific
methods, your processes become predictable. At maturity level five or optimizing
maturity level, your organization is focused on continuous process improvement. How is
CMMI applicable to software development and agile approaches? CMMI version 2.0
holds a lot of promise in this area. It includes guidance on how agile
methods can be used to optimize an organization's processes. Many agile approaches are
simple and very effective but have scaling problems. A process model that provides
organization wide agile scaling guidance may be very useful at an organizational
level. CMMI 2.0 has practices to move iteratively and incrementally towards optimal
performance, something which is perfectly aligned with the agile approaches. So I
recommend reviewing CMMI 2.0 for process improvement and organizational agility.
Six Sigma overview
- [Instructor] Lean and Six Sigma are both focused on minimizing waste and maximizing
customer value. Lean, when combined with Six Sigma, can improve customer
satisfaction and improve the quality of the goods and services that you produce. Let's
do a quick review of Six Sigma. Six Sigma originated in manufacturing and uses a
scientific and statistical approach to measure the quality of a process by tracking the
number of defects. The goal of Six Sigma is to reduce defects and keep the output of a
process within certain limits of a baseline. Six Sigma was developed at Motorola in the
1980s and popularized by Jack Welch at General Electric in the 90s. Six Sigma was
designed to improve the quality of products. Its effectiveness led to
widespread adoption by companies worldwide. Statistical methods calculate the Six
Sigma numbers and percentages. The sigma in Six Sigma indicates variation from a
desired reference point. A higher sigma level is better because it means fewer defects:
a process at five sigma will produce products with fewer defects than a process at
four sigma.
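The sigma levels translate directly into defect rates. The sketch below computes defects per million opportunities using Python's standard library and the conventional 1.5-sigma long-term shift, which is an industry convention rather than something stated in this course:

```python
# Defects per million opportunities (DPMO) at each sigma level,
# applying the conventional 1.5-sigma long-term process shift.
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    # Fraction of output falling beyond the spec limit, scaled to a million.
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):,.1f} defects per million")
# Six sigma works out to about 3.4 defects per million opportunities.
```

Each additional sigma level cuts the defect rate by more than an order of magnitude, which is why a five-sigma process beats a four-sigma one so decisively.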