
February 2011

Master of Computer Application (MCA) – Semester 3


MC0071 – Software Engineering – 4 Credits
Assignment Set – 1

Ques. 1- Describe the concurrent development model in your own words.


Answer:-

Introduction

Over the last two decades, traditional top-down approaches to software engineering, such as the
Waterfall model, have gradually lost their dominance in favor of evolutionary and iterative methods. In
an evolutionary approach, such as the Unified Process, previously separate phases such as requirements
analysis, design and implementation are allowed to overlap, and the requirements themselves can change
or evolve while the system is being developed. In an iterative approach, such as Extreme Programming,
the system is constructed through a series of versions, each of which delivers additional features, as
agreed at the start of each iteration. Iterative approaches naturally support and encourage the evolution of
requirements and design, since these typically change with each iteration; they are also therefore called
agile methods. These newer methods of software development were created in response to the failure, or
perceived failure, of top-down approaches to deliver systems that meet or beat their specification,
deadline and budget. Indeed, the increasing use of the term “software development” rather than “software
engineering” is indicative of the move to evolutionary approaches.

The currently accepted approach in the formal methods community is top-down. Designers
should use a formal, mathematically-based notation such as B or Z to write a specification, which is then
refined, gradually adding detail until a level has been reached that can straightforwardly or automatically
be turned into program code. Associated with this process are proof obligations; these could be ignored,
checked informally, or else proved correct, as for example in the B method.

This top-down formal stepwise development method has not been widely adopted (except for
safety-critical systems where additional regulatory requirements exist). Part of the reason for this failure,
we speculate, is that programmers are forced to wait until designers have completed the specification. Of
course, for a small system, the programmers will also be the designers – but larger teams typically have
some specialization of development roles. Completing a specification can be a time-consuming activity,
since it requires designers to think through all special and error cases, and in fact these cases sometimes
come to light only after implementation has started. In addition, in many cases there is not the time, nor
the skills, required to discharge all proof obligations. Where proof is not being used, it is not immediately
clear that writing formal specifications and designs (formal models) offers any significant benefits over
less formal but more popular modeling notations such as the UML.

This paper considers two issues: firstly, how a formal model can be incorporated in an
evolutionary development, and secondly, what benefits there may be in having a formal model in
situations where it is not necessary, or not desired, to carry out formal proofs. An example would be the
implementation of a business application, which is unlikely to be safety-critical, though it might be
mission-critical.
With regard to the first of these, in an evolutionary development, implementation work will begin
with a specification that is at most partially complete – which is why it is more helpful to use the phrase
“formal model” rather than “formal specification”. During the development, both the implementation and
model will evolve. The question arises, how can these two evolving artifacts be kept consistent? Indeed,
what does consistency mean in this context?

Benefits of Formal Models

This section considers potential benefits that can be obtained from having a formal, mathematical
model even in cases where it is not intended to prove (formally or informally) that the implementation
conforms to the model. One of the claimed benefits of formal methods is the extra insight gained from
giving a precise definition of system behavior. Similar claims are also made with respect to producing a
diagrammatic model using a notation such as the UML. Insight is hard to quantify, however, so it is
difficult to measure these benefits, or to compare formal and diagrammatic approaches. The clearest
benefits of having a formal model are those that follow from the availability of tool support, as described
below. The U2B tool, for example, can generate formal models from a diagrammatic UML model plus
annotations, allowing the benefits of each approach to be combined.

Animation

Animation or visualization of a model allows the developer to step through an execution
sequence of the model from state to state. It requires the model to be written in a form that can be
“executed”, though the notion of execution here is not limited to imperative programming. The ProB
tool [7], for example, uses constraint logic programming to determine the set of states that can follow the
current one in a B model even if the operations have non-deterministic definitions. ProB supports a point-
and-click user interface which provides a lightweight way of checking that a model defines the required
behavior. States can be displayed graphically or in textual form. Animation can be used to communicate
the intention and consequences of a model in a more dynamic way than reading its mathematical
definition. (Concrete examples help us to understand abstract definitions.) Clearly, animation can only
be used to explore a limited number of paths through the state transition graph, but it can help to eliminate
simple errors before a more systematic, but more expensive, error removal process is applied. It can help
to avoid misunderstandings that might occur with requirements expressed solely in natural language. By
comparison, animation facilities for UML models seem relatively limited. (With executable UML [17], a
platform dependent model can be produced and executed. This is not, as yet, a mainstream approach to
using the UML.)
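
As an illustration only (this is not ProB itself, and all names are hypothetical), the following small C++ sketch mimics what an animator does for a toy bounded-counter model: from the current state it computes which operations are enabled by their guards and then steps to a successor state, much as a user would by pointing and clicking.

#include <iostream>
#include <string>
#include <vector>

// Toy model: a counter bounded between 0 and 3, with two guarded operations.
struct State { int count = 0; };

struct Operation {
    std::string name;
    bool (*guard)(const State&);
    State (*apply)(const State&);
};

int main() {
    std::vector<Operation> ops = {
        {"increment", [](const State& s) { return s.count < 3; },
                      [](const State& s) { State t = s; ++t.count; return t; }},
        {"decrement", [](const State& s) { return s.count > 0; },
                      [](const State& s) { State t = s; --t.count; return t; }},
    };

    State current;
    // "Animate" three steps: show the enabled operations, then apply the first one.
    for (int step = 0; step < 3; ++step) {
        std::cout << "state: count = " << current.count << ", enabled:";
        for (const auto& op : ops)
            if (op.guard(current)) std::cout << ' ' << op.name;
        std::cout << '\n';
        for (const auto& op : ops)
            if (op.guard(current)) { current = op.apply(current); break; }
    }
}

A real animator works from the mathematical model rather than hand-written transition functions, but the stepping behaviour it offers the user is of this kind.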

Model-Checking

Model-checking tools support exhaustive investigation of a system’s state space, which typically
has to be finite. They can be used to detect states that do not satisfy an invariant, or to find execution
paths that do not satisfy properties expressed as a temporal logic formula. Note that the emphasis, as with
testing, is on finding errors, not proving their absence. This is because the model being checked is
usually a simplified version of the real system: absence of errors in the model does not guarantee the
system itself is correct. Conversely, errors found in the model usually indicate problems in the system
itself, though not always, because the model may be over-simplified or simply wrong. It is possible to
create a model, or series of models, to explore the consequences of design decisions, for example whether
to use synchronous or asynchronous message-passing. The model-checker can help the designer to
determine what properties are and are not satisfied. This technique is valuable in technology exploration
and risk-management, so it has been called “risk-driven” model-checking [3]; another appropriate name is
“throwaway modelling” as there is no obligation to keep the model once it has served its purpose.
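
As a minimal sketch of what exhaustive exploration means (this is not a real model-checker such as ProB or Spin, and the model and invariant are invented for illustration), the following C++ program performs a breadth-first search over the reachable states of a tiny two-counter model and reports the first reachable state that violates an invariant.

#include <iostream>
#include <queue>
#include <set>
#include <utility>
#include <vector>

// Toy model: two counters x and y; a transition may increment either one
// as long as it stays below 4. Invariant to check: x + y <= 5.
using State = std::pair<int, int>;

std::vector<State> successors(const State& s) {
    std::vector<State> next;
    if (s.first < 4)  next.push_back({s.first + 1, s.second});
    if (s.second < 4) next.push_back({s.first, s.second + 1});
    return next;
}

bool invariant(const State& s) { return s.first + s.second <= 5; }

int main() {
    std::set<State> visited;
    std::queue<State> frontier;
    frontier.push({0, 0});
    visited.insert({0, 0});

    while (!frontier.empty()) {
        State s = frontier.front();
        frontier.pop();
        if (!invariant(s)) {
            std::cout << "invariant violated in state (" << s.first
                      << ", " << s.second << ")\n";
            return 1;   // a real checker would also report the path to this state
        }
        for (const State& t : successors(s))
            if (visited.insert(t).second) frontier.push(t);
    }
    std::cout << "all " << visited.size()
              << " reachable states satisfy the invariant\n";
}

In this toy model the checker does find a violation (for example, the state (3, 3)), which is exactly the point made above: the emphasis is on finding errors, not proving their absence.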

Assertions

Formal models, particularly those based on pre- and post-conditions, are naturally associated
with the use of assertions in code. The AsmL [12] toolkit, for example, can automatically insert
assertions at appropriate points in the code to check that each operation conforms to its specification.
This approach is most helpful in debugging and error location, as assertion checking is usually disabled in
production software.
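
The following hand-written sketch shows the general idea (it is not the AsmL toolkit’s output, and the operation and its specification are invented): assertions check an operation’s pre-condition on entry and its post-condition on exit, so a violation in a debug build points directly at the operation that broke its specification.

#include <cassert>

// Hypothetical operation: withdraw(amount) on an account balance.
// Specification assumed for illustration:
//   pre:  0 < amount && amount <= balance
//   post: new balance == old balance - amount
int withdraw(int balance, int amount) {
    assert(amount > 0 && amount <= balance);      // pre-condition
    int old_balance = balance;
    balance -= amount;
    assert(balance == old_balance - amount);      // post-condition
    assert(balance >= 0);                         // invariant
    return balance;
}

int main() {
    int b = withdraw(100, 30);   // fine
    // withdraw(b, 200);         // would trip the pre-condition assert in a debug build
    (void)b;
}

Compiling with NDEBUG removes the assert calls, which is why, as noted above, this technique helps mainly with debugging and error location rather than in production.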

Model-Based Testing

It is possible to use a formal model to assist in testing. Classic techniques such as equivalence
partitioning and boundary testing can be applied to the definition of operations given in the model.
Mathematical formulas are converted into disjunctive normal form to discover cases of interest, and
constraint satisfaction is used to determine values that witness each such case. This approach has been an area
of active research for a number of years [10], and is now starting to deliver tools of industrial relevance
[15]. Clearly, properties postulated during model-checking could be used to generate test cases, either
systematically or automatically. It is worth noting, however, that writing good test cases is essentially a
creative activity. Automated test case generation can at most supplement, not replace, the use of skilled
manual testing.
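
As a small sketch of the boundary and equivalence-partitioning idea (the operation and its range are invented, and a real model-based tool would derive them from the formal definition of the operation), the C++ program below generates the usual boundary values for an assumed pre-condition 1 <= amount <= limit.

#include <iostream>
#include <vector>

// Hypothetical specification: accept(amount) is defined only for
// 1 <= amount <= limit. Equivalence partitioning gives three classes
// (below range, in range, above range); boundary testing adds the edges.
std::vector<int> boundaryValues(int low, int high) {
    return {low - 1, low, low + 1, high - 1, high, high + 1};
}

int main() {
    const int limit = 500;
    for (int v : boundaryValues(1, limit))
        std::cout << "test case: amount = " << v << '\n';
    // A skilled tester would still add cases a generator cannot invent,
    // e.g. amounts combined with an empty or closed account.
}

This mirrors the point made above: generated cases supplement, but do not replace, creatively designed manual test cases.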

Another model-based approach to software testing involves the use of execution traces or event
logs. It is common for production software to support event logging or tracing of some kind. Typically,
traces can be rather large, so it is appropriate to analyze them automatically. It is possible to write special-purpose
trace analysis tools, for example using Perl or a similar scripting language. Model-based trace-checking
[14] uses instead a model checker as the analysis tool; a trace and a model are supplied and the model-
checker determines whether or not the given trace conforms to the supplied model. An alternative name
for the technique is trace-driven model checking. ProB provides this facility directly. With a little
ingenuity, it is possible to use other popular model checkers such as Spin [13] in this fashion as well.
Trace-checking has the advantage relative to assertion checking that it can be run off-line and after the
fact. Global invariants can also be checked more easily. Trace checking only supplements other testing
techniques – there has to be a test-case or user driving the system in order for there to be a trace to check.
As with animation and model-checking, trace-checking is only applicable where the model is
“executable”, although the requirement is weaker here, since the input, output, pre- and post-
operation values are all available – it is only necessary to check these values, not to generate them.
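
A minimal sketch of trace-checking (this is not ProB or Spin; the event log and the model are invented): a recorded trace is replayed against a simple model of the system, and the first step at which the trace and the model disagree is reported.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical event log: each entry names an operation and the observed
// stock level after it. The "model" allows add/remove with stock >= 0.
struct Event { std::string op; int stockAfter; };

int main() {
    std::vector<Event> trace = {   // would normally be parsed from a log file
        {"add", 1}, {"add", 2}, {"remove", 1}, {"remove", 0}, {"remove", -1}};

    int stock = 0;                 // model state
    int step = 0;
    for (const Event& e : trace) {
        if (e.op == "add") ++stock;
        else if (e.op == "remove") --stock;
        if (stock < 0 || stock != e.stockAfter) {
            std::cout << "trace step " << step
                      << " does not conform to the model\n";
            return 1;
        }
        ++step;
    }
    std::cout << "trace conforms to the model\n";
}

Because the logged values are simply checked rather than generated, the model only has to be evaluable after the fact, which is the weaker notion of executability referred to above.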

Ques. 2:- Explain the following concepts with respect to Software Reliability:


A) Software Reliability Metrics B) Programming for Reliability

Answer:-
A) Software Reliability Metrics

Introduction

Computers and computer systems have become a significant part of our modern society.  It
is virtually impossible to conduct many day-to-day activities without the aid of computer systems
controlled by software.  As more reliance is placed on these software systems it is essential that
they operate in a reliable manner.  Failure to do so can result in high monetary, property or human
loss.

The NASA Software Assurance Standard, NASA-STD-8739.8, defines software reliability as a
discipline of software assurance that:

1. Defines the requirements for software controlled system fault/failure detection,
isolation, and recovery;
2. Reviews the software development processes and products for software error
prevention and/or reduced functionality states; and,
3. Defines the process for measuring and analyzing defects and defines/derives the
reliability and maintainability factors.

This section will provide software practitioners with a basic overview of software
reliability, as well as tools and resources on software reliability.

Software Reliability Techniques

Reliability techniques can be divided into two categories: Trending and Predictive.

• Trending reliability tracks the failure data produced by the software system to
develop a reliability operational profile of the system over a specified time.

• Predictive reliability assigns probabilities to the operational profile of a software
system; for example, the system has a 5 percent chance of failure over the next 60
operational hours.

In practice, reliability trending is more appropriate for software, whereas predictive
reliability is more suitable for hardware. Trending reliability can be further classified into
four categories: Error Seeding, Failure Rate, Curve Fitting, and Reliability Growth.

• Error Seeding: Estimates the number of errors in a program by using multistage
sampling. Errors are divided into indigenous and induced (seeded) errors. The
unknown number of indigenous errors is estimated from the number of induced
errors and the ratio of errors obtained from debugging data (a worked sketch of this
estimate follows this list).

• Failure Rate: Used to study the program failure rate per fault at the failure
intervals. As the number of remaining faults changes, the failure rate of the program
changes accordingly.

• Curve Fitting: Uses statistical regression analysis to study the relationship
between software complexity and the number of faults in a program, as well as the
number of changes, or failure rate.

• Reliability Growth: Measures and predicts the improvement of reliability
programs through the testing process. Reliability growth also represents the
reliability or failure rate of a system as a function of time or the number of test
cases.
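
As a worked sketch of the error-seeding estimate described above (the numbers are invented for illustration): if 20 errors are seeded, and testing rediscovers 15 of the seeded errors while also finding 30 indigenous errors, then the total number of indigenous errors is estimated as 30 × 20 / 15 = 40, so roughly 10 remain.

#include <iostream>

int main() {
    // Hypothetical debugging data for the error-seeding estimate.
    const double seededTotal     = 20;  // errors deliberately injected
    const double seededFound     = 15;  // injected errors rediscovered by testing
    const double indigenousFound = 30;  // genuine errors found by the same testing

    // Assumption of the technique: testing finds seeded and indigenous errors
    // at roughly the same rate, so
    //   indigenousFound / indigenousTotal ~= seededFound / seededTotal.
    double indigenousTotal = indigenousFound * seededTotal / seededFound;
    std::cout << "estimated indigenous errors: " << indigenousTotal << '\n';            // 40
    std::cout << "estimated remaining errors:  " << indigenousTotal - indigenousFound   // 10
              << '\n';
}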

Software Reliability Metrics

Listed below are some static code and dynamic metrics which are related to software
reliability. The static code metrics are divided into three categories with measurements under
each: Line Count, Complexity and Structure, and Object-Oriented metrics. The dynamic metrics
have two major measurements: Failure Rate Data and Problem Reports.

Static Code Metrics

• Line Count:
o Lines of Code (LOC)
o Source Lines of Code (SLOC)
• Complexity and Structure:
o Cyclomatic Complexity (CC) (a small illustration follows these lists)
o Number of Modules
o Number of Go To Statements (GOTO)
• Object-Oriented:
o Number of Classes
o Weighted Methods per Class (WMC)
o Coupling Between Objects (CBO)
o Response for a Class (RFC)
o Number of Child Classes (NOC)
o Depth of Inheritance Tree (DIT)

Dynamic Metrics

• Failure Rate Data
• Problem Reports
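
As a small illustration of one of the static metrics above, cyclomatic complexity can be computed as the number of decision points plus one. The function below is invented purely to show the count: it has two decisions (the loop condition and the if), so its cyclomatic complexity is 3.

#include <iostream>

// Hypothetical function used only to illustrate the metric.
// Decision points: the for condition and the if -> cyclomatic complexity = 2 + 1 = 3.
int sumOfPositives(const int* values, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)      // decision 1
        if (values[i] > 0)           // decision 2
            sum += values[i];
    return sum;
}

int main() {
    int data[] = {3, -1, 4, -1, 5};
    std::cout << sumOfPositives(data, 5) << '\n';   // prints 12
}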

Basic Mathematical Concepts

Mathematically, reliability R(t) is the probability that a system will be successful in the
interval from time 0 to time t:

R(t) = P(T > t),   t ≥ 0

where T is a random variable denoting the time-to-failure or failure time.

Unreliability F(t), a measure of failure, is defined as the probability that the system will fail by
time t.
F(t) = P(T ≤ t),   t ≥ 0.

In other words, F(t) is the failure distribution function. The following relationship applies to
reliability in general: the reliability R(t) is related to the failure probability F(t) by:

R(t) = 1 - F(t).
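
As a worked example (the exponential failure model used here is an assumption for illustration, not something given above): if the time-to-failure T follows an exponential distribution with a constant failure rate λ, then

R(t) = e^(-λt)   and   F(t) = 1 - e^(-λt).

With λ = 0.01 failures per hour, R(100) = e^(-1) ≈ 0.368, so the system has roughly a 37 percent chance of running for 100 hours without failure, and F(100) ≈ 0.632. As required, R(t) + F(t) = 1 for every t.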

B) Programming for Reliability

Modern static analyzers have been used to find hundreds of bugs in the Linux kernel and many
large commercial applications without the false alarm rate seen in previous generation tools such as lint.
Static analyzers find bugs in source code without the need for execution or test cases. They parse the
source code and then perform dataflow analysis to find potential error cases on all possible paths through
each function. No technique is a silver bullet, and static analysis is no exception. Static analysis is usually
only practical for certain categories of errors and cannot replace functional testing. However, compared
with traditional QA and testing techniques, static analysis provides a way to achieve much higher
coverage of corner cases in the code. Compared with manual code review, static analysis is impartial,
thorough, and cheap to perform regularly.

Inconsistent Assumptions
Many types of bugs are caused by making inconsistent assumptions. Consider Listing One from line 188
of linux-2.6.4/drivers/scsi/aic7xxx/aic7770_osm.c. This seems like a strange error report. Why
would ahc_alloc() free the argument name? Looking at the implementation, you see Listing Two. Clearly
if this function returns NULL, then the argument name is supposed to be freed, but the caller either didn't
know or forgot this facet of the interface. Why wasn't this found in testing? Probably because this is code
that runs when a device driver is initialized, and this is usually at boot time when malloc is unlikely to
fail. Unfortunately, this is not always the case, and it is possible that a device would be initialized later
(for example, with devices that can be hot-plugged).

Inconsistent assumptions are relatively easy to avoid when initially designing and writing code,
but they are difficult to avoid when maintaining code. Most code changes are usually localized to single
functions or files, but assumptions made about that code may be scattered throughout the code base.

It is especially common to see inconsistent assumptions related to conditions at the exit of a loop,
the join points at the end of if statements, the entry and exit points for functions, and error-handling code.
Static analyzers have an advantage over people in that they are not fazed by complex control flow, they
look for any path that has a contradiction of assumptions, and they have immediate access to information
across multiple source files and functions. Human auditors can be easily confused by irrelevant details of
the code. For example, in Listing Three, an array-bounds error from a Linux MIDI device driver,
outEndpoint is incremented in a loop that is looking for a particular condition to hold. After the loop, the
code assumes that the condition held at some point in the loop. But this assumption might be false;
otherwise, the first part of the loop condition, outEndpoint < 15, would be a redundant test. That is, if that test can
ever be false (why else would it be there?) then outEndpoint might be equal to 15 at the end of the loop,
which means mouts[outEndpoint] will be accessing data past the end of the array allocated on the stack.
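
Since Listing Three itself is not reproduced here, the following invented C++ fragment merely mirrors the pattern being described: the loop can exit either because the wanted condition was found or because the index reached its bound, so using the index afterwards without re-checking it can read past the end of the array.

#include <cstdio>

int main() {
    int endpointType[16] = {0};      // invented stand-in for device descriptor data
    endpointType[7] = 2;             // the type the loop is searching for
    int outs[15] = {0};

    int outEndpoint = 0;
    // Exits either when the wanted type is found OR when outEndpoint == 15.
    while (outEndpoint < 15 && endpointType[outEndpoint] != 2)
        ++outEndpoint;

    // The buggy assumption described above would be to use outs[outEndpoint]
    // here directly: if the type was never found, outEndpoint == 15 and the
    // access runs past the end of the 15-element array. A correct version
    // re-checks the index first:
    if (outEndpoint < 15)
        std::printf("using output endpoint %d, state %d\n",
                    outEndpoint, outs[outEndpoint]);
    else
        std::printf("no suitable output endpoint found\n");
}
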
Error-Handling Code

Developers typically program with the primary goal of implementing core functionality, while handling
exceptional cases is often seen as a distraction. Furthermore, designing test cases that exercise error paths
is difficult because it requires artificially inducing errors. Despite the difficulty of handling errors, it is
critical to do so. Unfortunately, debugging code often passes as error-handling code, such as in Listing
Four from a Linux SCSI device driver. The problem in this case is that the call to printk does not
terminate execution or signal an error to the caller. Either the error check on line 918 is unnecessary, or
the accesses on 920-922 will be out of bounds, leading to memory corruption. There are plenty of cases
where half-hearted error checking potentially leads to serious errors. Code auditors should determine
whether or not error checking is needed, and that it is done properly when required.
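
Listing Four is not shown here, but the pattern is easy to reconstruct in an invented form: printing a diagnostic, as printk does, is not error handling by itself; the check must also stop the out-of-range access, for example by returning an error code that the caller can act on.

#include <cstdio>

constexpr int kMaxChannels = 8;
int channelTable[kMaxChannels] = {0};

// Invented example of the pattern: proper handling returns an error instead
// of merely printing a diagnostic and then indexing out of bounds anyway.
int configureChannel(int channel, int value) {
    if (channel < 0 || channel >= kMaxChannels) {
        std::fprintf(stderr, "configureChannel: bad channel %d\n", channel);
        return -1;                     // signal the error; do NOT fall through
    }
    channelTable[channel] = value;     // reached only when the index is valid
    return 0;
}

int main() {
    if (configureChannel(12, 5) != 0)
        std::printf("caller saw the error and can recover\n");
}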

Even when error-handling code is in place, it is often riddled with bugs because it is difficult to make test
cases that trigger error conditions. In Listing Five, memory is being leaked inside the Linux code that
deals with IGMP, which is a protocol used for IP multicast. The function skb_alloc is a memory
allocation function that returns a buffer. In line 281, you see that the memory leak occurs when the
allocation succeeds (skb == NULL is false). In line 289, the function ip_route_output_key() is called. It
doesn't matter what this function does. What matters is that it can't free or save a pointer
to skb because skb is not passed to it. In the case where the function returns false, skb is still allocated. In
line 294, another condition is tested; if it is true, the function returns on line 296 without freeing the
memory allocated in line 280. If this code can be triggered by an external network packet, it becomes a
denial of service or remote crash attack on machines with IGMP enabled.

The error just described probably occurred because the programmer simply forgot to free the memory.
The widely used practice of having an error exit that performs cleanup would have prevented this error.
Alternatively, arenas or pools could have prevented this error by keeping memory associated with a
particular task together, then freeing all of the memory allocated by the pool once the task is finished. In
C++, the commonly used Resource Acquisition Is Initialization (RAII) idiom could prevent this sort of
error by automating the cleanup task in the destructors of scoped objects. However, take care even when
using RAII because resources that must survive past the end of the scope must have their cleanup actions
prevented. A common way of doing this is to provide a commit() method for the RAII object, but this
presents the potential for an error to occur if a commit() call is forgotten. In short, be careful because it is
ultimately up to you to decide how long objects should survive.
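
A minimal sketch of the RAII-with-commit idiom mentioned above (all names are invented; the buffer and the failing route lookup merely stand in for the skb leak pattern described earlier): the guard frees the buffer automatically on every early return, and calling commit() hands ownership on and suppresses the cleanup.

#include <cstdlib>
#include <iostream>

// Minimal scope guard for a malloc'd buffer; commit() cancels the cleanup
// once ownership has been transferred elsewhere.
class BufferGuard {
public:
    explicit BufferGuard(void* p) : ptr_(p) {}
    ~BufferGuard() { if (ptr_) std::free(ptr_); }
    void commit() { ptr_ = nullptr; }            // caller keeps the buffer
    BufferGuard(const BufferGuard&) = delete;
    BufferGuard& operator=(const BufferGuard&) = delete;
private:
    void* ptr_;
};

bool routeLookupFails() { return true; }         // invented stand-in for the route lookup

bool sendReport() {
    void* skb = std::malloc(256);                // invented stand-in for the skb allocation
    if (!skb) return false;
    BufferGuard guard(skb);

    if (routeLookupFails())
        return false;                            // guard frees skb here: no leak

    guard.commit();                              // from here on, someone else owns skb
    // ... hand skb on to the next layer ...
    return true;
}

int main() { std::cout << (sendReport() ? "sent\n" : "not sent\n"); }

If commit() is forgotten on the success path, the buffer is freed while still in use, which is exactly the residual risk the paragraph above warns about.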
Ques. 3:- Suggest six reasons why software reliability is important. Using an example,
explain the difficulties of describing what software reliability means.

Answer:

The need for a means to objectively determine software reliability comes from the desire to apply the
techniques of contemporary engineering fields to the development of software. That desire is a result of
the common observation, by both lay-persons and specialists, that computer software does not work the
way it ought to. In other words, software is seen to exhibit undesirable behaviour, up to and including
outright failure, with consequences for the data which is processed, the machinery on which the software
runs, and by extension the people and materials which those machines might negatively affect. The more
critical the application of the software to economic and production processes, or to life-sustaining
systems, the more important is the need to assess the software's reliability.

Regardless of the criticality of any single software application, it is also more and more frequently
observed that software has penetrated deeply into almost every aspect of modern life through the
technology we use. It is only expected that this infiltration will continue, along with an accompanying
dependency on the software by the systems which maintain our society. As software becomes more and
more crucial to the operation of the systems on which we depend, the argument goes, it only follows that
the software should offer a concomitant level of dependability. In other words, the software should
behave in the way it is intended, or even better, in the way it should.

A software quality factor is a non-functional requirement for a software program which is not called up
by the customer's contract, but nevertheless is a desirable requirement which enhances the quality of the
software program. Note that none of these factors are binary; that is, they are not “either you have it or
you don’t” traits. Rather, they are characteristics that one seeks to maximize in one’s software to optimize
its quality. So rather than asking whether a software product “has” factor x, ask instead the degree to
which it does (or does not).

Some software quality factors are listed here:


Understandability
Clarity of purpose. This goes further than just a statement of purpose; all of the design and user
documentation must be clearly written so that it is easily understandable. This is obviously subjective in
that the user context must be taken into account: for instance, if the software product is to be used by
software engineers it is not required to be understandable to the layman.

Completeness
Presence of all constituent parts, with each part fully developed. This means that if the code calls
a subroutine from an external library, the software package must provide reference to that library and all
required parameters must be passed. All required input data must also be available.
Conciseness
Minimization of excessive or redundant information or processing. This is important where
memory capacity is limited, and it is generally considered good practice to keep lines of code to a
minimum. It can be improved by replacing repeated functionality by one subroutine or function which
achieves that functionality. It also applies to documents.
Portability
Ability to be run well and easily on multiple computer configurations. Portability can mean both
between different hardware—such as running on a PC as well as a smartphone—and between different
operating systems—such as running on both Mac OS X and GNU/Linux.
Consistency
Uniformity in notation, symbology, appearance, and terminology within itself.
Maintainability
Propensity to facilitate updates to satisfy new requirements. Thus the software product that is
maintainable should be well-documented, should not be complex, and should have spare capacity for
memory, storage and processor utilization and other resources.
Testability
Disposition to support acceptance criteria and evaluation of performance. Such a characteristic
must be built-in during the design phase if the product is to be easily testable; a complex design leads to
poor testability.
Usability
Convenience and practicality of use. This is affected by such things as the human-computer
interface. The component of the software that has most impact on this is the user interface (UI), which for
best usability is usually graphical (i.e. a GUI).
Reliability
Ability to be expected to perform its intended functions satisfactorily. This implies a time factor
in that a reliable product is expected to perform correctly over a period of time. It also encompasses
environmental considerations in that the product is required to perform correctly in whatever conditions it
finds itself (sometimes termed robustness).
Efficiency
Fulfillment of purpose without waste of resources, such as memory, space and processor
utilization, network bandwidth, time, etc.

Security
Ability to protect data against unauthorized access and to withstand malicious or inadvertent
interference with its operations. Besides the presence of appropriate security mechanisms such as
authentication, access control and encryption, security also implies resilience in the face of malicious,
intelligent and adaptive attackers.
Time Example
There are two major differences between hardware and software failure curves. One difference is that in the last
phase, software does not have an increasing failure rate as hardware does. In this phase, software is
approaching obsolescence; there is no motivation for any upgrades or changes to the software.
Therefore, the failure rate will not change. The second difference is that in the useful-life phase, software
will experience a drastic increase in failure rate each time an upgrade is made. The failure rate levels off
gradually, partly because of the defects found and fixed after the upgrades.
Ques. 4- What are the essential skills and traits necessary for effective project managers in
successfully handling projects?

Answer:-

The Successful Project Manager


A successful project manager knows how to bring together the definition and control elements
and operate them efficiently. That means you will need to apply the leadership skills you already apply in
running a department and practice the organizational abilities you need to constantly look to the future.
In other words, if you’re a qualified department manager, you already possess the skills and attributes for
succeeding as a project manager. The criteria by which you will be selected will be similar.
Chances are, the project you’re assigned will have a direct relationship to the skills you need just to do
your job. For example:
• Organizational and leadership experience. An executive seeking a qualified project manager usually
seeks someone who has already demonstrated the ability to organize work and to lead others. He or she
assumes that you will succeed in a complicated long-term project primarily because you have already
demonstrated the required skills and experience.
• Contact with needed resources. For projects that involve a lot of coordination between departments,
divisions, or subsidiaries, top management will look for a project manager who already communicates
outside of a single department. If you have the contacts required for a project, it will naturally be assumed
that you are suited to run a project across departmental lines.
• Ability to coordinate a diverse resource pool. By itself, contact outside of your department may not be
enough. You must also be able to work with a variety of people and departments, even when their
backgrounds and disciplines are dissimilar. For example, as a capable project manager, you must be able
to delegate and monitor work not only in areas familiar to your own department but in areas that are alien
to your background.
• Communication and procedural skills. An effective project manager will be able to convey and
receive information to and from a number of team members, even when particular points of view are
different from his own. For example, a strictly administrative manager should understand the priorities of
a sales department, or a customer service manager may need to understand what motivates a production
crew.
• Ability to delegate and monitor work. Project managers need to delegate the work that will be
performed by each team member, and to monitor that work to stay on schedule and within budget. A
contractor who builds a house has to understand the processes involved for work done by each
subcontractor, even if the work is highly specialized. The same is true for every project manager. It’s not
enough merely to assign someone else a task, complete with a schedule and a budget. Delegation and
monitoring are effective only if you’re also able to supervise and assess progress.
• Dependability. Your dependability can be tested only in one way: by being given responsibility and
the chance to come through. Once you gain the reputation as a manager who can and does respond as
expected, you’re ready to take on a project.
These project management qualifications read like a list of evaluation points for every department
manager. If you think of the process of running your department as a project of its own, then you already
understand what it’s like to organize a project—the difference, of course, being that the project takes
place in a finite time period, whereas your departmental tasks are ongoing. Thus, every successful
manager should be ready to tackle a project, provided it is related to his or her skills, resources, and
experience.
Ques. 5:- Which are the four phases of development according to Rational Unified
Process?
Answer:-
The Rational Unified Process® is a Software Engineering Process. It provides a disciplined
approach to assigning tasks and responsibilities within a development organization. Its goal is to ensure
the production of high-quality software that meets the needs of its end-users, within a predictable
schedule and budget. The lifecycle it defines is divided into four consecutive phases: Inception,
Elaboration, Construction, and Transition, each of which ends at a well-defined milestone.
The Rational Unified Process is a process product, developed and maintained by Rational®
Software. The development team for the Rational Unified Process works closely with customers,
partners, Rational’s product groups and Rational’s consultant organization to ensure that the
process is continuously updated and improved upon to reflect recent experiences and evolving, proven
best practices. The Rational Unified Process enhances team productivity by providing every team
member with easy access to a knowledge base with guidelines, templates and tool mentors for all critical
development activities. Because all team members access the same knowledge base, whether they
work with requirements, design, test, project management, or configuration management, they
share a common language, process and view of how to develop software.
The Rational Unified Process activities create and maintain models. Rather than focusing on the
production of large amounts of paper documents, the Unified Process emphasizes the development and
maintenance of models—semantically rich representations of the software system under development.
The Rational Unified Process is a guide for how to effectively use the Unified Modeling
Language (UML). The UML is an industry-standard language that allows us to clearly communicate
requirements, architectures and designs. The UML was originally created by Rational Software, and is
now maintained by the standards organization Object Management Group (OMG).
Effective Deployment of 6 Best Practices
The Rational Unified Process describes how to effectively deploy commercially proven
approaches to software development for software development teams. These are called “best practices”
not so much because you can precisely quantify their value, but rather, because they are observed to be
commonly used in industry by successful organizations. The Rational Unified Process provides each team
member with the guidelines, templates and tool mentors necessary for the entire team to take full
advantage of among others the following best practices:
1. Develop software iteratively
2. Manage requirements
3. Use component-based architectures
4. Visually model software
5. Verify software quality
6. Control changes to software
Develop Software Iteratively
Given today’s sophisticated software systems, it is not possible to sequentially first define the
entire problem, design the entire solution, build the software and then test the product at the end. An
iterative approach is required that allows an increasing understanding of the problem through successive
refinements, and to incrementally grow an effective solution over multiple iterations. The Rational
Unified Process supports an iterative approach to development that addresses the highest risk items at
every stage in the lifecycle, significantly reducing a project’s risk profile. This iterative approach helps
you attack risk through demonstrable progress: frequent, executable releases that enable continuous end-
user involvement and feedback. Because each iteration ends with an executable release, the development
team stays focused on producing results, and frequent status checks help ensure that the project stays on
schedule. An iterative approach also makes it easier to accommodate tactical changes in requirements,
features or schedule.
Manage Requirements
The Rational Unified Process describes how to elicit, organize, and document required
functionality and constraints; track and document tradeoffs and decisions; and easily capture and
communicate business requirements. The notions of use cases and scenarios prescribed in the process have
proven to be an excellent way to capture functional requirements and to ensure that these drive the design,
implementation and testing of software, making it more likely that the final system fulfills the end user
needs. They provide coherent and traceable threads through both the development and the delivered
system.
Use Component-based Architectures
The process focuses on early development and baselining of a robust executable architecture,
prior to committing resources for full-scale development. It describes how to design a resilient
architecture that is flexible, accommodates change, is intuitively understandable, and promotes more
effective software reuse. The Rational Unified Process supports component-based software development.
Components are non-trivial modules or subsystems that fulfill a clear function. The Rational Unified
Process provides a systematic approach to defining an architecture using new and existing components.
These are assembled in a well-defined architecture, either ad hoc, or in a component infrastructure such as
the Internet, CORBA, and COM, for which an industry of reusable components is emerging.
Visually Model Software
The process shows you how to visually model software to capture the structure and behavior of
architectures and components. This allows you to hide the details and write code using “graphical
building blocks.” Visual abstractions help you communicate different aspects of your software; see how
the elements of the system fit together; make sure that the building blocks are consistent with your code;
maintain consistency between a design and its implementation; and promote unambiguous
communication. The industry-standard Unified Modeling Language (UML), created by Rational Software,
is the foundation for successful visual modeling.
Verify Software Quality
Poor application performance and poor reliability are common factors which dramatically inhibit
the acceptability of today’s software applications. Hence, quality should be reviewed with respect to the
requirements based on reliability, functionality, application performance and system performance. The
Rational Unified Process assists you in the planning, design, implementation, execution, and evaluation of
these test types. Quality assessment is built into the process, in all activities, involving all participants,
using objective measurements and criteria, and not treated as an afterthought or a separate activity
performed by a separate group.
Control Changes to Software
The ability to manage change, making certain that each change is acceptable and being able to
track changes, is essential in an environment in which change is inevitable. The process describes how to
control, track and monitor changes to enable successful iterative development. It also guides you in how
to establish secure workspaces for each developer by providing isolation from changes made in other
workspaces and by controlling changes of all software artifacts (e.g., models, code, documents, etc.). And
it brings a team together to work as a single unit by describing how to automate integration and build
management.
The Rational Unified Process product consists of:
• A web-enabled searchable knowledge base providing all team members with guidelines, templates,
and tool mentors for all critical development activities. The knowledge base can further be broken down
to:
• Extensive guidelines for all team members, and all portions of the software lifecycle. Guidance
is provided for both the high-level thought process, as well as for the more tedious day-to-day activities.
The guidance is published in HTML form for easy platform-independent access on your desktop.
• Tool mentors providing hands-on guidance for tools covering the full lifecycle. The tool
mentors are published in HTML form for easy platform-independent access on your desktop. See section
"Integration with Tools" for more details.
• Rational Rose ® examples and templates providing guidance for how to structure the
information in Rational Rose when following the Rational Unified Process (Rational Rose is Rational's
tool for visual modeling)
• SoDA ® templates: more than 10 SoDA templates that help automate software documentation
(SoDA is Rational’s document automation tool)
• Microsoft ® Word templates: more than 30 Word templates assisting documentation in all
workflows and all portions of the lifecycle
• Microsoft Project Plans
Many managers find it difficult to create project plans that reflect an iterative development
approach. These templates jump-start the creation of project plans for iterative development, according to
the Rational Unified Process.
• Development Kit
Describes how to customize and extend the Rational Unified Process to the specific needs of the
adopting organization or project, as well as provides tools and templates to assist the effort. This
development kit is described in more detail later in this section.
• Access to Resource Center containing the latest white papers, updates, hints, and techniques, as well
as references to add-on products and services.
• A book "Rational Unified Process — An Introduction", by Philippe Kruchten, published by Addison
Wesley. The book is 277 pages long and provides a good introduction to and overview of the process and the
knowledge base.
Ques. 6:- Describe the Capability Maturity Model with suitable real time examples.
Answer:-
The Capability Maturity Model (CMM) is a multistaged, process definition model intended to
characterize and guide the engineering excellence or maturity of an organization’s software development
processes. The Capability Maturity Model: Guidelines for Improving the Software Process (1995)
contains an authoritative description. See also Paulk et al. (1993) and Curtis, Hefley, and Miller (1995)
and, for general remarks on continuous process improvement, Somerville, Sawyer, and Viller (1999) (see
Table 3.2). The model prescribes practices for “planning, engineering, and managing software
development and maintenance” and addresses the usual goals of organizational system engineering
processes: namely, “quality improvement, risk reduction, cost reduction, predictable process, and
statistical quality control” (Oshana & Linger 1999).
However, the model is not merely a program for how to develop software in a professional,
engineering-based manner; it prescribes an “evolutionary improvement path from an ad hoc, immature
process to a mature, disciplined process” (Oshana & Linger 1999). Walnau, Hissam, and Seacord (2002)
observe that the ISO and CMM process standards “established the context for improving the practice of
software development” by identifying roles and behaviors that define a software factory.
The CMM identifies five levels of software development maturity in an organization:
· At level 1, the organization’s software development follows no formal development process.
· The process maturity is said to be at level 2 if software management controls have been introduced and
some software process is followed. A decisive feature of this level is that the organization’s process is
supposed to be such that it can repeat the level of performance that it achieved on similar successful past
projects. This is related to a central purpose of the CMM: namely, to improve the predictability of the
development process significantly. The major technical requirement at level 2 is incorporation of
configuration management into the process.

Configuration management (or change management, as it is sometimes called) refers to the processes
used to keep track of the changes made to the development product (including all the intermediate
deliverables) and the multifarious impacts of these changes. These impacts range from the recognition of
development problems; identification of the need for changes; alteration of previous work; verification
that agreed upon modifications have corrected the problem and that corrections have not had a negative
impact on other parts of the system; etc.
· An organization is said to be at level 3 if the development process is standard and consistent. The
project management practices of the organization are supposed to have been formally agreed on, defined,
and codified at this stage of process maturity.
· Organizations at level 4 are presumed to have put into place qualitative and quantitative
measures of organizational process. These process metrics are intended to monitor development and to
signal trouble and indicate where and how a development is going wrong when problems occur.
· Organizations at maturity level 5 are assumed to have established mechanisms designed to
ensure continuous process improvement and optimization. The metric feedbacks at this stage are not just
applied to recognize and control problems with the current project as they were in level-4 organizations.
They are intended to identify possible root causes in the process that have allowed the problems to occur
and to guide the evolution of the process so as to prevent the recurrence of such problems in future
projects, such as through the introduction of appropriate new technologies and tools.
The higher the CMM maturity level is, the more disciplined, stable, and well-defined the
development process is expected to be and the environment is assumed to make more use of “automated
tools and the experience gained from many past successes” (Zhiying 2003). The staged character of the
model lets organizations progress up the maturity ladder by setting process targets for the organization.
Each advance reflects a further degree of stabilization of an organization’s development process, with
each level “institutionaliz[ing] a different aspect” of the process (Oshana & Linger 1999).

Each CMM level has associated key process areas (KPA) that correspond to activities that must
be formalized to attain that level. For example, the KPAs at level 2 include configuration management,
quality assurance, project planning and tracking, and effective management of subcontracted software.
The KPAs at level 3 include intergroup communication, training, process definition, product engineering,
and integrated software management. Quantitative process management and development quality define
the required KPAs at level 4. Level 5 institutionalizes process and technology change management and
optimizes defect prevention.
Bamberger (1997), one of the authors of the Capability Maturity Model, addresses what she
believes are some misconceptions about the model. For example, she observes that the motivation for the
second level, in which the organization must have a “repeatable software process,” arises as a direct
response to the historical experience of developers when their software development is “out of control”
(Bamberger 1997). Often this is for reasons having to do with configuration management – or
mismanagement! Among the many symptoms of configuration mismanagement are: confusion over
which version of a file is the current official one; inadvertent side effects when repairs by one developer
obliterate the changes of another developer; inconsistencies among the efforts of different developers; etc.
A key appropriate response to such actual or potential disorder is to get control of the product
and the “product pieces under development” (configuration management) by (Bamberger 1997):
· Controlling the feature set of the product so that the “impact/s of changes are more fully understood”
(requirements management)
· Using the feature set to estimate the budget and schedule while “leveraging as much past knowledge as
possible” (project planning)
· Ensuring schedules and plans are visible to all the stakeholders (project tracking)
· Ensuring that the team follows its own plan and standards and “corrects discrepancies when they occur”
(quality assurance)
Bamberger contends that this kind of process establishes the “basic stability and visibility” that are the
essence of the CMM repeatable level.
