
Object Oriented Analysis and Design: Software Industries Best Practices

Software Industries Best Practices


The Rational Unified Process shows how you can apply best practices of software engineering, and
how you can use tools to automate your software engineering process.

The best practices are:

1. Develop Iteratively
2. Manage Requirements
3. Use Component Architectures
4. Model Visually (UML)
5. Continuously Verify Quality
6. Manage Change

1. Develop Iteratively

To mitigate risks, develop incrementally in an iterative fashion. Each iteration results in an executable release.

Compiled by Roshan Chitrakar 1 of 16



1.1. What is Iterative Development?


A project using iterative development has a lifecycle consisting of several iterations. An iteration
incorporates a loosely sequential set of activities in business modeling, requirements, analysis and
design, implementation, test, and deployment, in various proportions depending on where in the
development cycle the iteration is located. Iterations in the inception and elaboration phases focus on
management, requirements, and design activities; iterations in the construction phase focus on
design, implementation, and test; and iterations in the transition phase focus on test and deployment.
Iterations should be managed in a timeboxed fashion, that is, the schedule for an iteration should be
regarded as fixed, and the scope of the iteration's content actively managed to meet that schedule.
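The timeboxing rule above can be sketched in code: the schedule is fixed, so scope is what gives. The work-item names, priorities, and the capacity figure below are all hypothetical, and "effort" stands in for whatever estimation unit a team actually uses.

```python
# Sketch: timeboxed iteration -- capacity (the schedule) is fixed,
# so scope is trimmed by dropping the lowest-priority work items.

def fit_to_timebox(items, capacity):
    """items: list of (name, priority, effort); a lower priority number
    means more important. Returns the names kept, highest priority first."""
    kept, used = [], 0
    for name, priority, effort in sorted(items, key=lambda it: it[1]):
        if used + effort <= capacity:
            kept.append(name)
            used += effort
    return kept

backlog = [
    ("login flow", 1, 5),
    ("report export", 3, 4),
    ("audit logging", 2, 3),
    ("theming", 4, 6),
]
print(fit_to_timebox(backlog, 10))  # whatever does not fit is deferred, not the date
```

The point of the sketch is only the direction of the trade-off: when content and schedule conflict, content is actively cut rather than the iteration's end date moved.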

1.2. Why Develop Iteratively?


An initial design is likely to be flawed with respect to its key requirements. Late discovery of design
defects results in costly over-runs and, in some cases, even project cancellation.

All projects have a set of risks involved. The earlier in the lifecycle you can verify that you've avoided a risk, the more accurate you can make your plans. Many risks are not even discovered until you've attempted to integrate the system. You will never be able to predict all risks, regardless of how experienced the development team is.

In a waterfall lifecycle, you can't verify whether you have stayed clear of a risk until
late in the lifecycle.

In an iterative lifecycle, you select what increment to develop in an iteration based on a list of key risks. Since
the iteration produces a tested executable, you can verify whether you have mitigated the targeted risks or not.
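The risk-driven selection just described can be illustrated with a minimal sketch: rank the key risks by exposure, then target the worst ones in the next iteration. The risk descriptions and numbers here are invented for illustration.

```python
# Sketch: order the project's key risks by exposure (probability x impact)
# so the next iteration's content targets the highest-exposure risks first,
# and the iteration's tested executable verifies they were mitigated.

def rank_risks(risks):
    """risks: list of (description, probability, impact). Highest exposure first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

risks = [
    ("database throughput too low", 0.4, 9),
    ("third-party API unstable", 0.7, 6),
    ("UI layout disagreements", 0.9, 2),
]
for description, p, i in rank_risks(risks):
    print(f"{description}: exposure {p * i:.1f}")
```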


1.3. Benefits of an Iterative Approach


An iterative approach is generally superior to a linear or waterfall approach for many different
reasons.

Risks are mitigated earlier, because elements are integrated progressively.
Changing requirements and tactics are accommodated.
Improving and refining the product is facilitated, resulting in a more robust product.
Organizations can learn from this approach and improve their process.
Reusability is increased.

1.3.1. Mitigating risks

An iterative approach lets you mitigate risks earlier, because many risks are only addressed and discovered during integration. As you work through the early iterations, you go through all disciplines, exercising many aspects of the project: tools, off-the-shelf software, people skills, and so on. Perceived risks may prove not to be risks, and new, unsuspected risks will show up.

Integration is not one "big bang" at the end—elements are incorporated progressively. In reality, the
iterative approach is an almost continuous integration. What used to be a long, uncertain, and
difficult time—taking up to 40% of the total effort at the end of a project—and what was hard to plan
accurately, is divided into six to nine smaller integrations that start with far fewer elements to
integrate.

1.3.2. Accommodating changes

The iterative approach lets you take changing requirements into account, since requirements will normally change along the way.

Changes in requirements and requirements "creep" have always been primary sources of trouble for a
project, leading to late delivery, missed schedules, unsatisfied customers, and frustrated developers.
Twenty-five years ago, Fred Brooks wrote: "Plan to throw one away; you will, anyhow." Users will
change their mind along the way. This is human nature. Forcing users to accept the system as they
originally imagined it is wrong. They change their minds because the context is changing—they
learn more about the environment and the technology, and they see intermediate demonstration of
the product as it's being developed.

An iterative lifecycle provides management with a way of making tactical changes to the product.
For example, to compete with existing products, you may decide to release a reduced-functionality
product earlier to counter a move by a competitor, or you may adopt another vendor for a given
technology.

Iteration also allows for technological changes along the way. If some technology changes or
becomes a standard as new technology appears, the project can take advantage of it. This is
particularly the case for platform changes and lower-level infrastructure changes.

1.3.3. Reaching higher quality

An iterative approach results in a more robust architecture because errors are corrected over several
iterations. Early flaws are detected as the product matures during the early iterations. Performance
bottlenecks are discovered and can be reduced, as opposed to being discovered on the eve of
delivery.


Developing iteratively, as opposed to running tests once toward the end of the project, results in a
more thoroughly tested product. Critical functions have had many opportunities to be tested over
several iterations, and the tests themselves, and any test software, have had time to mature.

1.3.4. Learning and improving

Developers can learn along the way, and the various competencies and specialties are more fully
employed during the whole lifecycle.

Rather than waiting a long time just making plans and honing their skills, testers start testing early,
technical writing starts early, and so on. The need for additional training or external help can be
detected in the early iteration assessment reviews.

The process itself can be improved and refined as it develops. The assessment at the end of an
iteration not only looks at the status of the project from a product-schedule perspective, but also
analyzes what needs to be changed in the organization and the process to perform better in the next
iteration.

1.3.5. Increasing reuse

An iterative lifecycle facilitates reuse. It's easier to identify common parts as they are partially
designed or implemented, compared to having to identify all commonality up front.

Identifying and developing reusable parts is difficult. Design reviews in early iterations allow
software architects to identify unsuspected, potential reuse, and subsequent iterations allow them to
further develop and mature this common code.

Using an iterative approach makes it easier to take advantage of commercial off-the-shelf products. You have several iterations to select them, integrate them, and validate that they fit with the architecture.

2. Manage Requirements
2.1. What is Requirements Management?

Requirements management is a systematic approach to finding, documenting, organizing, and tracking a system's changing requirements.

We define a requirement as "a condition or capability to which the system must conform".

We formally define requirements management as a systematic approach to both:

eliciting, organizing, and documenting the requirements of the system
establishing and maintaining agreement between the customer and the project team on the system's changing requirements

Keys to effective requirements management include maintaining a clear statement of the requirements, along with applicable attributes and traceability to other requirements and other project artifacts.


Collecting requirements may sound like a rather straightforward task. In reality, however, projects
run into difficulties for the following reasons:

Requirements are not always obvious, and can come from many sources.
Requirements are not always easily or clearly expressed in words.
There are many different types of requirements at different levels of detail.
The number of requirements can become unmanageable if they're not controlled.
Requirements are related to one another and also to other deliverables of the software
engineering process.
Requirements have unique properties or property values. For example, they are not
necessarily equally important nor equally easy to meet.
There are many interested parties, which means requirements need to be managed by cross-
functional groups of people.
Requirements change.

So, what skills do you need to develop in your organization to help you manage these difficulties?
We've learned that the following skills are important to master:

Analyzing the problem
Understanding stakeholder needs
Defining the system
Managing the scope of the project
Refining the system definition
Managing changing requirements

2.1.1. Analyzing the problem

Problems are analyzed to understand the problems and initial stakeholder needs, and to propose high-level solutions. It's an act of reasoning and analysis to find "the problem behind the problem". During
problem analysis, agreement is gained on what the real problems are and on who the stakeholders
are. From a business perspective you also define the boundaries of the solution and any business
constraints on the solution. The business case for the project must also be analyzed so there is a good
understanding of what return is expected on the investment made in the system being built.

2.1.2. Understanding stakeholder needs

Requirements come from many sources; for example, customers, partners, end users, and domain
experts. You need to know how to determine what the best sources should be, how to access those
sources, and how to elicit information from them most effectively. The individuals who provide the
primary sources for this information are referred to as stakeholders in the project.

If you’re developing an information system to be used internally within your company, you may
include people with end-user experience and business domain expertise in your development team.
Very often you will start the discussions at a business model level rather than at a system level. If
you’re developing a product to be sold to a specific marketplace, you may make extensive use of
your marketing people to better understand the needs of customers in that market.

Elicitation activities may occur using techniques such as interviews, brainstorming, conceptual
prototyping, questionnaires, and competitive analysis. The result of the elicitation is a list of requests
or needs that are described textually and graphically, and that have been given priority relative to one
another.


2.1.3. Defining the system

Defining the system means translating and organizing the understanding of stakeholder needs into a
meaningful description of the system to be built. Early in system definition, decisions are made about
what constitutes a requirement, documentation format, language formality, degree of requirements
specificity (how many and in what detail), request priority and estimated effort (two very different
valuations usually determined by different people in separate exercises), technical and management
risks, and initial scope. Part of this activity may include early prototypes and design models directly
related to the most important stakeholder requests. The outcome of system definition is a description
of the system that uses both natural language and graphical representations.

2.1.4. Managing the scope of the project

To efficiently run a project, you need to carefully prioritize the requirements, based on input from all stakeholders, and manage the project's scope. Too many projects suffer from developers working on so-called "Easter eggs" (features the developer finds interesting and challenging) rather than focusing early on tasks that mitigate a risk to the project or stabilize the architecture of the application. Make sure that you resolve or mitigate risks in a project as early as possible by developing your system incrementally, carefully choosing requirements for each increment that mitigate known risks in the project. This means you need to negotiate the scope of each iteration with the project's stakeholders.
Typically this requires good skills in managing expectations of the output from the project in its
different phases. You also need to control the sources of the requirements, how the deliverables of
the project look, as well as the development process itself.

2.1.5. Refining the system definition

The detailed definition of the system needs to be presented in such a way that your stakeholders can understand it, agree to it, and sign off on it. It needs to cover not only functionality, but also compliance with any legal or regulatory requirements, usability, reliability, performance, supportability, and maintainability. A frequent error is believing that what you feel is complex to build needs to have a complex definition. This leads to difficulties in explaining the purpose of the
project and the system. People may be impressed, but they will not give good input because they
don’t understand. Special attention needs to be given to understanding the audience for whom the
artifacts are being produced; often, different kinds of description are needed for different audiences.

We have seen that the use-case methodology, often in combination with simple visual prototypes, is
a very efficient way of communicating the purpose and defining the details of the system. Use cases
help put requirements into a context; they tell a story of how the system will be used.

Another component of the detailed definition of the system is to state how the system should be
tested. Test plans and definitions of what tests to perform tell us what system capabilities will be
verified.

2.1.6. Managing changing requirements

No matter how carefully you've defined your requirements, there will always be things that change.
What makes changing requirements complex to manage is not only that a changed requirement
means that time has to be spent on implementing a particular new feature, but also that a change to
one requirement may have an impact on other requirements. You need to make sure that you give
your requirements a structure that is resilient to changes, and you need to use traceability links to
represent dependencies between requirements and other artifacts of the development lifecycle.
Managing change includes such activities as establishing a baseline, determining which

dependencies are important to trace, establishing traceability between related items, and
implementing change control.
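The traceability links described above can be sketched as a small dependency graph: when a requirement changes, following the links transitively yields every downstream artifact that may need rework. The requirement, use-case, and artifact names below are hypothetical.

```python
# Sketch: traceability links as a graph from each item to the items that
# depend on it; a change is traced to everything transitively impacted.

def impacted(traces, changed):
    """traces: dict mapping item -> list of items that trace to it.
    Returns the set of all items transitively impacted by `changed`."""
    seen, stack = set(), [changed]
    while stack:
        item = stack.pop()
        for dependent in traces.get(item, []):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return seen

traces = {
    "REQ-12": ["UC-3", "REQ-15"],          # use case and derived requirement
    "UC-3":   ["design: OrderService", "test: TC-31"],
    "REQ-15": ["test: TC-40"],
}
print(sorted(impacted(traces, "REQ-12")))  # every artifact to re-examine
```

This is the mechanical core of change impact analysis; real requirements tools add typed links, baselines, and change-control workflow on top of it.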

2.2. How is Development Driven by Use Cases?

Our recommended method for organizing your functional requirements is using use cases. Instead of
a bulleted list of requirements, organize them in a way that tells a story of how someone may use the
system. This provides for greater completeness and consistency, and also provides a better
understanding of the importance of a requirement from a user's perspective.

From a traditional object-oriented system model, it's often difficult to tell how a system does what it's
supposed to do. This difficulty stems from the lack of a "red thread" through the system when it
performs certain tasks. In the Rational Unified Process (RUP), use cases are that thread because they
define the behavior performed by a system. Use cases are not part of traditional object orientation,
but their importance has become even more apparent. This is further emphasized by the fact that use
cases are part of the Unified Modeling Language.

The RUP employs a "use-case driven approach", which means that use cases defined for a system are
the basis for the entire development process.

Use cases play a part in several disciplines.

The concept of use cases can be used to represent business processes, as defined in the
business modeling discipline. We call this use-case variant a "business use case".
The use-case model is one of the key resulting artifacts of the requirements discipline. It identifies what the system needs to do from the user's point of view. Use cases constitute an important fundamental concept that must be acceptable to the customer, developers, and testers of the system alike.
During analysis & design, use cases are realized in a design model. You create use-case
realizations, which describe how the use case is supported by the design in terms of
interacting objects in the design model. This model describes, in terms of design objects, the
different parts of the system that will need to be implemented, and how the parts need to
interact to support the required use cases.
During implementation, the design model acts as the implementation specification. Because
use cases are the basis for the design model, they are implemented in terms of collaborating
design classes.
During test, the use cases provide the necessary scenarios that constitute the key basis for
identifying functional test scenarios. Those test scenarios are used to derive test cases and
test scripts; the functionality of the system is verified by executing test scenarios that exercise
each use case.
In the project management discipline, use cases are used as a basis for planning iterative
development.
In the deployment discipline, use cases form a foundation for what is described in user's
manuals. Use cases can also be used to define ordering units of the product. For example, a
customer can get a system configured with a particular mix of use cases.
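One of the roles listed above, use cases as the basis for test scenarios, can be sketched as follows. The use case, its flows, and the derived scenario names are invented for illustration; the point is only that each flow of a use case maps to at least one functional test scenario, so tests trace back to requirements.

```python
# Sketch: a use case as a basic flow plus alternative flows; each flow
# yields one functional test scenario named after the use case.

class UseCase:
    def __init__(self, name, basic_flow, alternative_flows):
        self.name = name
        self.basic_flow = basic_flow
        self.alternative_flows = alternative_flows

    def test_scenarios(self):
        # one scenario per flow, traceable to the owning use case
        flows = [self.basic_flow] + self.alternative_flows
        return [f"{self.name} / {flow}" for flow in flows]

withdraw = UseCase(
    "Withdraw Cash",
    basic_flow="sufficient funds",
    alternative_flows=["insufficient funds", "card retained"],
)
for scenario in withdraw.test_scenarios():
    print(scenario)
```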


3. Use Component Architectures


3.1. What Does Component Architecture Mean?
Components are cohesive groups of code, in source or executable form, with well-defined interfaces
and behaviors that provide strong encapsulation of their contents, and are, therefore, replaceable.
Architectures based around components tend to reduce the effective size and complexity of the
solution, and so are more robust and resilient.
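The definition above (well-defined interfaces, strong encapsulation, replaceability) can be illustrated with a minimal sketch. The interface and both implementations are hypothetical; the point is that either component can be swapped behind the same interface without touching its clients.

```python
# Sketch: a component is used only through its well-defined interface,
# so any implementation honouring that interface is a drop-in replacement.

from abc import ABC, abstractmethod

class PaymentGateway(ABC):          # the component's interface
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class LivePaymentGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        # a real network call would go here; the details stay encapsulated
        return amount_cents > 0

class FakePaymentGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return True                 # replaceable stand-in, e.g. for testing

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # client code depends on the interface, never on a concrete component
    return "paid" if gateway.charge(amount_cents) else "declined"

print(checkout(FakePaymentGateway(), 500))  # components swap freely
```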

3.2. Architectural Emphasis


Use cases drive the Rational Unified Process (RUP) end-to-end over the whole lifecycle, but the design activities are centered around the notion of system architecture and, for software-intensive systems, software architecture. The main focus of the early iterations of the process—mostly in the elaboration phase—is to produce and validate a software architecture, which in the initial development cycle takes the form of an executable architectural prototype that gradually evolves to become the final system in later iterations.

[Figure: Component-based architecture with layers]

By executable architecture, we mean a partial implementation of the system built to demonstrate selected system functions and properties, in particular those satisfying non-functional requirements. The purpose of executable architecture is to mitigate risks related to performance, throughput, capacity, reliability, and other "ilities", so that the complete functional capability of the system may be added in the construction phase on a solid foundation, without fear of breakage.

The RUP provides a methodical, systematic way to design, develop, and validate an architecture. It
offers templates for architectural description around the concepts of multiple architectural views, and
for the capture of architectural style, design rules, and constraints. The Analysis and Design
discipline contains specific activities aimed at identifying architectural constraints and architecturally
significant elements, as well as guidelines on how to make architectural choices. The management
process shows how the planning of the early iterations takes into account the design of an
architecture and the resolution of the major technical risks.

Architecture is important for several reasons:

It lets you gain and retain intellectual control over the project, to manage its complexity and
to maintain system integrity.

A complex system is more than the sum of its parts; more than a succession of small independent
tactical decisions. It must have some unifying, coherent structure to organize those parts
systematically and it must provide precise rules on how to grow the system without having its
complexity "explode" beyond human understanding.

The architecture establishes the means for improved communication and understanding throughout
the project by establishing a common set of references, a common vocabulary with which to discuss
design issues.


It is an effective basis for large-scale reuse.

By clearly articulating the major components and the critical interfaces between them, an
architecture lets you reason about reuse—both internal reuse, which is the identification of common
parts, and external reuse, which is the incorporation of ready-made, off-the-shelf components.
However, it also allows reuse on a larger scale: the reuse of the architecture itself in the context of a
line of products that addresses different functionality in a common domain.

It provides a basis for project management.

Planning and staffing are organized along the lines of major components. Fundamental structural
decisions are taken by a small, cohesive architecture team; they are not distributed. Development is
partitioned across a set of small teams, each responsible for one or several parts of the system.

3.3. Component-based Development


A software component can be defined as a nontrivial piece of software, a module, a package, or a
subsystem, all of which fulfill a clear function, have a clear boundary, and can be integrated in a
well-defined architecture. It's the physical realization of an abstraction in your design.

Components come from different places:

In defining a very modular architecture, you identify, isolate, design, develop, and test well-
formed components. These components can be individually tested and gradually integrated to
form the whole system.
Furthermore, some of these components can be developed to be reusable, especially the
components that provide common solutions to a wide range of common problems. These
reusable components, which may be larger than just collections of utilities or class libraries,
form the basis of reuse within an organization, increasing overall software productivity and
quality.
More recently, the advent of commercially successful component infrastructures—such as
CORBA, the Internet, ActiveX, and JavaBeans—has triggered a whole industry of off-the-shelf
components for various domains, allowing you to buy and integrate components rather than
developing them all in-house.

The first point in the preceding list exploits the old concepts of modularity and encapsulation, bringing those concepts underlying object-oriented technology a step further. The last two points in the list shift software development from programming software a line at a time to composing software by assembling components.

The RUP supports component-based development in these ways:

The iterative approach allows you to progressively identify components, and decide which
ones to develop, which ones to reuse, and which ones to buy.
The focus on software architecture allows you to articulate the structure—the components
and the ways in which they integrate—which include the fundamental mechanisms and
patterns by which they interact.
Concepts, such as packages, subsystems, and layers, are used during Analysis & Design to
organize components and to specify interfaces.
Testing is first organized around components, then gradually around larger sets of integrated
components.


4. Model Visually (UML)

Visual modeling raises the level of abstraction

4.1. What is Visual Modeling?


Visual modeling is the use of semantically rich, graphical and textual design notations to capture
software designs. A notation, such as UML, allows the level of abstraction to be raised, while
maintaining rigorous syntax and semantics. In this way, it improves communication in the design
team, as the design is formed and reviewed, allowing the reader to reason about the design, and it
provides an unambiguous basis for implementation.

4.2. Why Do We Model?


A model is a simplified view of a system. It shows the essentials of the system from a particular
perspective and hides the non-essential details. Models can help in the following ways:

aiding understanding of complex systems
exploring and comparing design alternatives at a low cost
forming a foundation for implementation
capturing requirements precisely
communicating decisions unambiguously

4.2.1. Aiding understanding of complex systems

The importance of models increases as systems become more complex. For example, a doghouse can be constructed without blueprints. However, as one progresses to houses, and then to skyscrapers, the need for blueprints becomes pronounced.

Similarly, a small application built by one person in a few days may be easily understood in its
entirety. However, an e-commerce system with tens of thousands of source lines of code (SLOCs)—
or an air traffic control system with hundreds of thousands of SLOCs—can no longer be easily
understood by one person. Constructing models allows a developer to focus on the big picture,
understand how components interact, and identify fatal flaws.

Some examples of models are:

Use Cases to unambiguously specify behavior
Class Diagrams and Data Model Diagrams to capture design
State Transition Diagrams to model dynamic behavior

Modeling is important because it helps the team visualize, construct, and document the structure and
behavior of the system, without getting lost in complexity.
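A state transition diagram of the kind listed above can be rendered directly as a transition table, which is one way a model forms a foundation for implementation. The order-lifecycle states and events here are invented for illustration.

```python
# Sketch: a state transition diagram as a table of
# (current state, event) -> next state; events not in the diagram are illegal.

TRANSITIONS = {
    ("placed",  "pay"):     "paid",
    ("paid",    "ship"):    "shipped",
    ("shipped", "deliver"): "delivered",
    ("placed",  "cancel"):  "cancelled",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

state = "placed"
for event in ("pay", "ship", "deliver"):  # the diagram's happy path
    state = next_state(state, event)
print(state)
```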

4.2.2. Exploring and comparing design alternatives at a low cost

Simple models can be created and modified at a low cost to explore design alternatives. Innovative
ideas can be captured and reviewed by other developers before investing in costly code
development. When coupled with iterative development, visual modeling helps developers to assess
design changes and communicate these changes to the entire development team.

4.2.3. Forming a foundation for implementation

Today many projects employ object-oriented programming languages to obtain reusable, change-
tolerant, and stable systems. To obtain these benefits, it's even more important to use object
technology in design. The Rational Unified Process (RUP) produces an object-oriented design model
that is the basis for implementation.

With the support of appropriate tools, a design model can be used to generate an initial set of code
for implementation. This is referred to as "forward engineering" or "code generation". Design
models may also be enhanced to include enough information to build the system.

Reverse engineering may also be applied to generate design models from existing
implementations. This may be used to evaluate existing implementations.

"Round trip engineering" combines both forward and reverse engineering techniques to ensure
consistent design and code. Combined with an iterative process, and the right tools, round-trip
engineering allows design and code to be synchronized during each iteration.

4.2.4. Capturing requirements precisely

Before building a system, it's critical to capture the requirements. Specifying the requirements using
a precise and unambiguous model helps to ensure that all stakeholders can understand and agree on
the requirements.

A model that separates the external behavior of the system from the implementation helps you focus
on the intended use of the system, without getting bogged down in implementation details.

4.2.5. Communicating decisions unambiguously

The RUP uses the Unified Modeling Language (UML), a consistent notation that can be applied for
system engineering as well as business engineering. A standard notation serves the following roles:

"It serves as a language for communicating decisions that are not obvious or cannot be
inferred from the code itself."
"It provides semantics that are rich enough to capture all important strategic and tactical
decisions."
"It offers a form concrete enough for humans to reason and for tools to manipulate."


UML represents the convergence of the best practice in software modeling throughout the object-
technology industry.
5. Continuously Verify Quality

Software problems are 100 to 1000 times more costly to find and repair after deployment than in the early phases of development. Verifying and managing quality throughout the project's lifecycle is essential to achieving the right objectives at the right time.

5.1. What Does Quality Verification Throughout the Lifecycle Mean?


It's important that the quality of all artifacts is assessed at several points in the project's lifecycle as
they mature. Artifacts should be evaluated as the activities that produce them complete and at the
conclusion of each iteration. In particular, as executable software is produced, it should be subjected
to demonstration and test of important scenarios in each iteration, which provides a more tangible
understanding of design trade-offs and earlier elimination of architectural defects. This is in contrast
to a more traditional approach that leaves the testing of integrated software until late in the project's
lifecycle.

5.2. What is Quality?


5.2.1. Introduction

Quality is something we all strive for in our products, processes, and services. Yet when asked,
"What is Quality?", everyone has a different opinion. Common responses include one or the other of
these:

"Quality ... I'm not sure how to describe it, but I'll know it when I see it."
"... meeting requirements."

Perhaps the most frequent reference to quality, specifically related to software, is this remark
regarding its absence:

"How could they release something like this with such low quality!?"

These commonplace responses are telling, but they offer little room to rigorously examine quality
and improve upon its execution. These comments all illustrate the need to define quality in a manner
in which it can be measured and achieved.

Quality, however, is not a singular characteristic or attribute. It's multi-dimensional and can be
possessed by a product or a process. Product quality is concentrated on building the right product,
whereas process quality is focused on building the product correctly.


5.2.2. Definition of Quality

The definition of quality, taken from The American Heritage Dictionary of the English Language,
3rd Edition, Houghton Mifflin Co., © 1992, 1996, is:

Quality (kwol'i-te) n., pl. -ties. Abbr. qlty. 1.a. An inherent or distinguishing characteristic; a
property. b. A personal trait, especially a character trait. 2. Essential character; nature. 3.a.
Superiority of kind. b. Degree or grade of excellence.

As demonstrated by this definition, quality is not a single dimension, but many. To use the definition
and apply it to software development, the definition must be refined. Therefore, for the purposes of
the Rational Unified Process (RUP), quality is defined as:

"...the characteristic of having demonstrated the achievement of producing a product that meets or
exceeds agreed-on requirements—as measured by agreed-on measures and criteria—and that is
produced by an agreed-on process."

Achieving quality is not simply "meeting requirements" or producing a product that meets user
needs and expectations. Quality also includes identifying the measures and criteria that
demonstrate its achievement, and implementing a process to ensure that the product created by that
process reaches the desired degree of quality, and that this result can be repeated and managed.

5.2.3. Who Owns Quality?

A common misconception is that quality is owned by, or is the responsibility of, one group. This
myth is often perpetuated by creating a group, sometimes called Quality Assurance—other names
include Test, Quality Control, and Quality Engineering—and giving them the charter and the
responsibility for quality.

Quality is, and should be, the responsibility of everyone. Achieving quality must be integral to
almost all process activities, instead of a separate discipline, thereby making everyone responsible
for the quality of the products (or artifacts) they produce and for the implementation of the process in
which they are involved.

Each role contributes to the achievement of quality in the following ways:

Product quality—the contribution to the overall achievement of quality in each artifact being
produced.
Process quality—the achievement of quality in the process activities in which they are
involved.

Everyone shares in the responsibility and glory of achieving a high-quality product, or in the shame
of a low-quality one. But only those directly involved in a specific process component are
responsible for the glory, or shame, of the quality of that component (and its artifacts).
Someone, however, must take responsibility for managing quality; that is, providing the
supervision to ensure that quality is being managed, measured, and achieved. The role responsible
for managing quality is the Project Manager.

5.2.4. Common Misconceptions about Quality

There are many misconceptions regarding quality; the most common include:


Quality can be added to or "tested" into a product
Quality is a single dimension, attribute, or characteristic and means the same thing to everyone
Quality happens on its own

5.2.4.1. Quality can be added to or "tested" into a product

Just as a product cannot be produced without a description of what it is, what it needs to do, who
uses it, how it's used, and so on, quality cannot be attained if it is not described, measured, and
made part of the process of creating the product.

5.2.4.2. Quality is a single dimension, attribute, or characteristic and means the same thing to
everyone

Quality is not a single dimension, attribute, or characteristic. Quality is measured in many ways—
quality metrics and criteria are established to meet the needs of project, organization, and customer.

Quality can be measured along several dimensions—some apply to process quality; some to product
quality; some to both. Quality can be measured for:

Progress—such as use cases demonstrated or milestones completed
Variance—differences between planned and actual schedules, budgets, staffing requirements,
and so forth
Reliability—resistance to failure (crashing, hanging, memory leaks, and so on) during
execution
Function—the artifact implements and executes the required use cases as intended
Performance—the artifact executes and responds in a timely and acceptable manner, and
continues to perform acceptably when subjected to real-world operational characteristics such
as load, stress, and lengthy periods of operation
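Some of these dimensions can be made concrete as simple computations. The following sketch is illustrative only: the metric definitions and sample values are assumptions for the example, not prescribed by the RUP.

```python
# Illustrative sketch of two quality dimensions from the list above.
# The data values are invented for the example.

def schedule_variance(planned_days: float, actual_days: float) -> float:
    """Variance: relative difference between planned and actual schedule."""
    return (actual_days - planned_days) / planned_days

def reliability(test_runs: int, failures: int) -> float:
    """Reliability: fraction of test runs completed without failure."""
    return (test_runs - failures) / test_runs

# A 10-day iteration that took 12 days overran its plan by 20%:
print(f"variance: {schedule_variance(10, 12):.0%}")   # variance: 20%
# 200 test runs with 8 failures gives 96% reliability:
print(f"reliability: {reliability(200, 8):.0%}")      # reliability: 96%
```

The point is not these particular formulas, but that each dimension becomes useful only once a project agrees on how it is computed.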

5.2.4.3. Quality happens on its own

Quality cannot happen by itself. For quality to be achieved, a process must be implemented,
adhered to, and measured. The purpose of the RUP is to provide a disciplined approach to assigning
tasks and responsibilities within a development organization. Our goal is to ensure the production of
high-quality software that meets the needs of our end users, within a predictable schedule and
budget. The RUP captures many of the best practices in modern software development in a form that
can be tailored for a wide range of projects and organizations. The Environment discipline gives you
guidance about how to best configure the process to your needs.

Processes can be configured and quality—criteria for acceptability—can be negotiated, based upon
several factors. The most common factors are:

Risk (including liability)
Market opportunities
Revenue requirements
Staffing or scheduling issues
Budgets

Changes in the process and criteria for acceptability should be identified and agreed upon at the
outset of the project.
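One way to make "agreed upon at the outset" concrete is to record the criteria for acceptability in a form that can be checked mechanically at each iteration. A minimal sketch follows; the criteria names and threshold values are invented for illustration.

```python
# Hypothetical acceptance criteria, agreed at project outset and
# recorded so they can be evaluated mechanically each iteration.
CRITERIA = {
    "min_reliability": 0.95,        # fraction of test runs without failure
    "max_schedule_variance": 0.10,  # tolerated relative schedule overrun
    "max_open_defects": 25,
}

def violated_criteria(measures: dict) -> list:
    """Return the names of the agreed criteria the measures violate."""
    failures = []
    if measures["reliability"] < CRITERIA["min_reliability"]:
        failures.append("min_reliability")
    if measures["schedule_variance"] > CRITERIA["max_schedule_variance"]:
        failures.append("max_schedule_variance")
    if measures["open_defects"] > CRITERIA["max_open_defects"]:
        failures.append("max_open_defects")
    return failures

print(violated_criteria({"reliability": 0.97,
                         "schedule_variance": 0.20,
                         "open_defects": 12}))  # ['max_schedule_variance']
```

Negotiating a change to the criteria then means changing this record explicitly, rather than silently relaxing expectations.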


5.3. Management of Quality in the RUP


Managing quality is done for these purposes:

To identify appropriate indicators (metrics) of acceptable quality
To identify appropriate measures to be used in evaluating and assessing quality
To identify and appropriately address issues affecting quality as early and effectively as
possible

Managing quality is implemented throughout all disciplines, workflows, phases, and iterations in the
RUP. In general, managing quality throughout the lifecycle means you implement, measure, and
assess both process quality and product quality. Some of the efforts expended to manage quality in
each discipline are highlighted in the following list:

Managing quality in the Requirements discipline includes analyzing the requirements artifact
set for consistency (between artifact standards and other artifacts), clarity (clearly
communicating information to all stakeholders and other roles), and precision (the appropriate
level of detail and accuracy).
In the Analysis & Design discipline, managing quality includes assessing the design artifact
set, including the consistency of the design model, its translation from the requirements
artifacts, and its translation into the implementation artifacts.
In the Implementation discipline, managing quality includes assessing the implementation
artifacts and evaluating the source code or executable artifacts against the appropriate
requirements, design, and test artifacts.
The Test discipline is highly focused toward managing quality, as most of the efforts
expended in this discipline address the three purposes of managing quality, identified
previously.
The Environment discipline, like the Test discipline, includes many efforts addressing the
purposes of managing quality. Here you can find guidance on how to best configure your
process to meet your needs.
Managing quality in the Deployment discipline includes assessing the implementation and
deployment artifacts, and evaluating the executable and deployment artifacts against the
appropriate requirements, design, and test artifacts needed to deliver the product to your
customer.
The Project Management discipline includes an overview of many efforts for managing
quality, including the reviews and audits required to assess the implementation, adherence,
and progress of the development process.
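The consistency checks described above for the Requirements and Test disciplines can be approximated with a simple traceability check: every requirement should be covered by at least one test artifact. The identifiers below are illustrative, not real RUP artifact names.

```python
# Sketch of a requirements-to-test traceability check.
# Artifact identifiers are invented for the example.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
tests = {
    "TC-01": {"REQ-1"},           # each test case lists the
    "TC-02": {"REQ-1", "REQ-3"},  # requirements it exercises
}

# Union of everything the test set touches, then the gap.
covered = set().union(*tests.values())
uncovered = requirements - covered
print(sorted(uncovered))  # ['REQ-2']
```

Automating a check like this is one way the quality issues named above get surfaced "as early and effectively as possible" rather than at release time.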

6. Manage Change


Managing change is more than just checking files in and out. It includes
management of workspaces, parallel development, integration, and builds.

A key challenge when you're developing software-intensive systems is that you must cope with
multiple developers, organized into different teams, possibly at different sites, working together on
multiple iterations, releases, products, and platforms. In the absence of disciplined control, the
development process rapidly degenerates into chaos. In the Rational Unified Process, the
Configuration & Change Management discipline describes how you meet this challenge.

6.1. Coordinating the Activities and Artifacts

Coordinating the activities and artifacts of developers and teams involves establishing repeatable
procedures for managing changes to software and other development artifacts. This coordination
allows a better allocation of resources based on the project's priorities and risks, and it actively
manages the work on those changes across iterations. Coupled with developing your software
iteratively, this practice lets you continuously monitor changes so that you can actively discover, and
then react to, problems.

6.2. Coordinating Iterations and Releases


Coordinating iterations and releases involves establishing and releasing a tested baseline at the
completion of each iteration. Maintaining traceability among the elements of each release and among
elements across multiple, parallel releases is essential for assessing and actively managing the impact
of change.
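A tested baseline can be thought of as an immutable, labelled record tying an iteration to the exact versions of its elements. The sketch below illustrates the idea only; the names are invented and do not correspond to any real configuration-management API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: a baseline, once released, never changes
class Baseline:
    """An immutable, labelled snapshot released at iteration completion."""
    label: str
    artifact_versions: tuple  # (artifact name, version) pairs
    tests_passed: bool

def release(label, versions, tests_passed):
    """Release a baseline, refusing if it has not been tested."""
    if not tests_passed:
        raise ValueError("only tested baselines may be released")
    return Baseline(label, tuple(sorted(versions.items())), True)

b = release("iteration-3", {"design_model": "1.4", "source": "r312"}, True)
print(b.label, dict(b.artifact_versions))
```

The two properties the text calls for are both visible here: the baseline is fixed once released, and it records which artifact versions belong together, which is what makes impact-of-change questions answerable later.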

6.3. Controlling Changes to Software


Controlling changes to software offers a number of solutions to the root causes of software
development problems:

The workflow of requirements change is defined and repeatable.
Change requests facilitate clear communications.
Isolated workspaces reduce interference among team members working in parallel.
Change rate statistics provide good metrics for objectively assessing project status.
Workspaces contain all artifacts, which facilitates consistency.
Change propagation is assessable and controlled.
Changes can be maintained in a robust, customizable system.
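The change rate statistics mentioned above can be computed directly from a change-request log. The log format and data below are invented for illustration.

```python
from collections import Counter

# Hypothetical change-request log: (iteration, state) pairs.
change_requests = [
    (1, "closed"), (1, "closed"), (2, "closed"), (2, "open"),
    (3, "open"), (3, "open"), (3, "closed"),
]

# Change rate: how many requests were raised in each iteration.
raised = Counter(iteration for iteration, _ in change_requests)
# Backlog: how many requests are still open right now.
open_now = sum(1 for _, state in change_requests if state == "open")

print(dict(sorted(raised.items())))  # {1: 2, 2: 2, 3: 3}
print("open:", open_now)             # open: 3
```

A rising raise rate or a growing open backlog late in a project is exactly the kind of objective status signal the practice is after.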

Sources:

• Rational Rose Enterprise Suite, Release Version 2002.05.20
• Grady Booch, Object-Oriented Analysis and Design with Applications
• http://www.rational.com
