
2010

SOFTWARE ENG CONCEPT AND TOOL

ASSIGNMENT 3

SUBMITTED TO:

MONA LISA MAM

SUBMITTED BY:
ANJANI KUNWAR
RA1803A10
10807973
B.TECH(CSE)-H
Q1. Consider a program containing many modules. If a global variable x must be used to share data between two modules A and B, how would you design the modules to minimize coupling?

ANS:

The varieties of module coupling are reviewed below, from worst to best. To minimize the coupling between A and B, the shared value x should be passed as explicit data (data coupling) rather than accessed as a shared global (common coupling); a sketch of this design follows the discussion of data coupling.

Varieties of Module Coupling

Different kinds, from worst to best...

Content coupling (Bad, avoid it!)

Common coupling (Pretty bad!)

Control coupling (Not great, not terrible)

Stamp coupling (Pretty good!)

Data coupling (Good! Seek it!)

Varieties of Module Coupling - Some Details

Different kinds, from worst to best...

Content coupling (Bad! Avoid it!)

One module refers directly to part of another

Example: A module changes a statement in another module

Example: A module refers to data by an offset in another module.

It is easy to content-couple modules in assembly language. This is one reason why high-level languages are better (programmers do more with less effort).

Why is content coupling so bad?


- Maintenance on the called module may require maintenance on the calling module too

- Can't reuse the calling module without including the called one; "they are inextricably linked" (Schach). They can hardly be considered separate modules at all!
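As a minimal C++ sketch of coupling by data offset (the AccountData record and its field names are assumptions, not from the original example): module B reads module A's record by its memory layout instead of through an interface, so any change to A's layout silently breaks B.

    #include <cstddef>  // offsetof

    // Hypothetical internal record owned by module A.
    struct AccountData {
        int    count;
        double balance;
    };

    // Module B reaches into A's record by byte offset rather than calling
    // an accessor. This is content coupling: reordering or retyping A's
    // fields breaks B without any warning at the interface level.
    double readBalanceByOffset(const void* raw) {
        const char* base = static_cast<const char*>(raw);
        return *reinterpret_cast<const double*>(base + offsetof(AccountData, balance));
    }

    int main() {
        AccountData a{1, 250.0};
        double b = readBalanceByOffset(&a);  // works today, breaks if A's layout changes
        return b > 0 ? 0 : 1;
    }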

Common coupling (bad, but not quite as bad!)

Both modules share same data

Problems: it is hard for programmers to see how the modules interact, and changing one module likely requires changing the other(s).

Example: the trafficLight classes. If color is public, the classes are common-coupled; if color is private, that is better. A common-coupled module is not very reusable because it is tied into the overall program.

Example: replacing an automobile engine.

Has security problems: seeing unauthorized data

A malicious module could see/do destructive things to the data

Let's break it down into subcategories

1. Global common coupling

- common coupling using global data

2. Non-global common coupling

- local data is accessible from outside

A. Read common coupling

- read-only common coupling

B. Write common coupling

- common coupling with read+write
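A hedged C++ sketch of the trafficLight example above (the class and member names are assumptions): global common coupling, and the improvement gained by making the data private behind a small interface.

    // Common coupling: both modules read and write the same global.
    int color = 0;                          // shared global state

    void moduleA() { color = 1; }           // A changes the light
    void moduleB() { if (color == 1) { /* react */ } }  // B depends on A's writes

    // Better: hide the state behind a class with a private member, so
    // modules interact only through a narrow, controlled interface.
    class TrafficLight {
    public:
        void setColor(int c) { color_ = c; }
        int  getColor() const { return color_; }
    private:
        int color_ = 0;                     // no longer visible to other modules
    };

    int main() {
        TrafficLight light;
        light.setColor(1);                  // access is now explicit and auditable
        return light.getColor() == 1 ? 0 : 1;
    }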


Control coupling (not great, but not terrible)

One module controls decisions made by another module

Example: a control flag is passed from one module to another

- Remember the module that takes an integer arg and prints the
corresponding error message?

Example: cout << hex; in C++

Problem: one module must know too much about the other (i.e. its
internal logic and structure)

Hard to reuse such modules

Example: a module that calls the error printer with a number
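A minimal C++ sketch of the error-printer example (the function name and message table are assumptions): the caller passes a control value that steers the callee's internal logic, so the caller must know the callee's numbering scheme.

    #include <iostream>

    // Control coupling: the integer argument selects which message is
    // printed, so every caller must know this module's internal numbering.
    void printError(int code) {
        switch (code) {
            case 1:  std::cout << "File not found\n"; break;
            case 2:  std::cout << "Out of memory\n";  break;
            default: std::cout << "Unknown error\n";  break;
        }
    }

    int main() {
        printError(2);  // the caller is coupled to printError's internal logic
        return 0;
    }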

Stamp coupling (good, but not the best)

Modules pass data structures but only use parts of them

Problem: harder to understand what the modules do

...because the unused parts confuse the reader

Problem: easy for modules to accidentally mess up data

Problem: malicious data access/modification

Example: passing a pointer in C

Everything the pointer points to is now accessible!
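A hedged C++ sketch of stamp coupling (the Employee record is an assumption): the callee receives a whole structure but uses only one field, so it can see, and accidentally modify, everything else.

    struct Employee {
        int    id;
        double salary;
        char   name[64];
    };

    // Stamp coupling: only 'salary' is used, but the whole record (and a
    // pointer to all of it) is exposed to this module.
    double monthlyPay(Employee* e) {
        return e->salary / 12.0;  // could also touch id or name by mistake
    }

    int main() {
        Employee e{7, 60000.0, "Ada"};
        double pay = monthlyPay(&e);  // the module needed only e.salary
        return pay > 0 ? 0 : 1;
    }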

Data coupling (The best! Use it!)

Modules communicate only the data that is actually used

Considered the best kind of coupling

...makes maintenance easier
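To answer the question directly: instead of letting A and B both touch the global x, pass x as an explicit parameter and return value, so the modules are only data-coupled. A minimal sketch (the function names are assumptions):

    #include <iostream>

    int computeX() { return 42; }    // module A produces the value

    void useX(int x) {               // module B receives exactly the data it needs
        std::cout << "x = " << x << "\n";
    }

    int main() {
        useX(computeX());            // x flows through an explicit interface,
        return 0;                    // not through a shared global
    }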


Q2. Construct a table showing the advantages and disadvantages of different structural models?

ANS:
Model Type    | Strengths                                                                  | Weaknesses
--------------|----------------------------------------------------------------------------|-------------------------------------------------------------------------------------
Repository    | Efficient way to share large amounts of data                               | Subsystems must compromise between the specific needs of tools
Client-Server | Distributed design; easy to add and integrate new servers into the system  | Changes to clients may be necessary to fully utilize new servers
Layered       | Supports incremental development of systems; very changeable and portable  | Difficult to structure; subsystems may have to circumvent the model to be effective

Q3. Why may an object-oriented approach to software development not be sufficient for real-time systems?

ANS:

The problems that arise when requirements for a software system are poorly understood are widely recognized and well documented in a number of books and articles. Still, the problems persist and, in my estimation, will never totally disappear. Here are a few reasons for this.

• Pushing the Technology Envelope. When systems are expanding into areas
of new technology, the "requirements" are frequently being discovered as
development proceeds.
• Building Product Platforms. When it is the objective of a project to develop
a platform to support a family of products over an extended period of time,
seldom are all the members of this family defined yet. Many are little more
than a glimmer in some marketing VP's eye. In situations like this,
requirements are never well understood at the beginning, nor do they remain
static over time.
Schedules

Producing relevant schedules for software development is even more problematic than doing the development itself. This is especially true for the initial analysis phase, when we don't know what we don't know yet. Add to that a learning curve of uncertain slope for a new analysis method, and it truly becomes a Herculean task. No analysis method, it would seem, can offer the clairvoyant assistance needed here. What we can ask of a good method, however, is to provide relevant feedback for adjusting and calibrating the schedule. Suggestions for how to do this with OOA are described in the next section.

Size/Complexity

At issue here is not that systems are large or complex, but that their true size and
complexity are often poorly understood. As long as this is the case, schedule
projections suffer and a mild sense of being out of control persists. It is in the
analysis phase that the size/complexity question must be answered, and the
analysis method should have two important characteristics to deal with this. First is
the ability to scale up for large and complex problems if necessary. Second is the
ability to factor and reduce the size and complexity to their smallest possible
dimensions. The Domain Chart of OOA is a valuable tool for doing this and its use
is described in the next section.

Engineering Skills

Successful software engineering requires a number of very different technical skills. These include the ability to do analysis, system design, program design, coding, integration, and testing. Very few individuals are equally talented in all of these areas.

My informal sampling of software engineers produces a distribution that peaks in the middle with program design and coding, and falls off in both directions, with good analysts and testers being the rarest. This creates two challenges during the analysis phase. One, good analysts must be identified and assigned judiciously. Two, productive work must be found for members with other skills. OOA offers some help in this area by having different roles within the analysis phase.

The design approach associated with OOA (Recursive Design [6]) also offers
opportunities to begin some design and implementation tasks early in some areas,
often while analysis is still proceeding in other areas.
Silver Bullet Syndrome

Perhaps the most insidious real-world problem when introducing new software methods is the overzealous and unrealistic expectation that the next new method (object-oriented anything, these days) will prove to be the long-awaited "silver bullet" with which the dreaded software development dragon can be slain.

It is insidious because it carries enthusiasm that is short-lived and a commitment that is only slightly longer-lived. Addressing this problem is outside the scope of any particular method, though it must be done if a method is to be assessed on its true merits. The best policy here is complete frankness about the benefits and liabilities of a method, and an ability to adjust for these.

Q4. Suggest situations where it is unwise or impossible to provide a consistent user interface?

ANS:

A consistent user interface may be impossible to produce for complex systems with a large number of interface options. In such systems there is a wide imbalance between the extent of usage of different commands, so for frequently used commands it is desirable to have shortcuts. Unless all commands have shortcuts, consistency is impossible.

It may also be the case in complex systems that the entities manipulated are of quite
different types and it is inappropriate to have consistent operations on each of these types.

An example of such a system is an operating system interface. Even MacOS, which has attempted to be as consistent as possible, has inconsistent operations that users like. For example, to delete a file it is dragged to the trash, but dragging a disk image to the trash does not delete it; instead, it unmounts that disk.

Q5. How is validation different from verification?

ANS:
Verification and validation (also known simply as V&V) are two parts of the same quality process, used in software project management, software testing, and software engineering. Together they check that a software system meets its specifications and fulfils the intended purpose of its creation. V&V is also commonly known as software quality control.

Validation is the portion of the software checks and balances that confirms that the product design satisfies, or fits, the use for which it was intended. This is known as high-level checking: it asks whether the right product was built. It carries out this task using dynamic testing and a variety of other forms of review. Dynamic testing specifically examines the system's response to inputs that are not constant and that are prone to change over time. In a basic sense, validation ensures that the product meets the needs of the user. It also ensures that the specifications were, in fact, correct from the beginning of the program. Basically, validation lets you know whether you have built the right thing.

Verification is the portion of the software checks and balances that evaluates the software to determine whether the products found in a given development phase satisfy the conditions that were put forth at the beginning of that particular phase. In a basic sense, verification ensures that the product has been built according to the requirements and design specifications that were introduced at the beginning of the program. Put simply, verification lets you know that the product was built correctly.

Beyond the software community, the definitions of verification and validation are somewhat similar. In the modelling and simulation community, validation is the process of determining the degree of accuracy of a model, simulation, or federation of models and simulations and their associated data, and of deciding whether they are accurate representations of the real world from the perspective of the intended use of the model. Verification, on the other hand, is the process of determining whether a computer model, simulation, or federation of models and simulations, together with its associated data, faithfully implements the developer's conceptual descriptions and specifications.

Verification:

1. It is a quality improvement process.

2. It involves reviewing and evaluating the process.

3. It is conducted by the QA team.

4. Verification is about correctness.

5. Are we producing the product right?

Validation:

1. It ensures the functionality.

2. It is conducted by the development team with help from the QC team.

3. Validation is about truth.

4. Validation follows verification.

5. Are we producing the right product?

Q6. Explain why adopting an approach to design that is based on loosely coupled objects that hide information about their representation should lead to a design which may be readily modified?

ANS:

Object-oriented design is concerned with developing an object-oriented model of a software system to implement the identified requirements. The objects in an object-oriented design are related to the solution to the problem that is being solved. There may be close relationships between some problem objects and some solution objects, but the designer inevitably has to add new objects and to transform problem objects to implement the solution.

There are two major problems encountered when modifying systems:

- Understanding which entities in a program are logically part of some greater entity.

- Ensuring that changes do not have unanticipated side-effects (i.e., a change to one entity has an undesirable effect on some other entity).

Object-oriented development helps to reduce these problems as it supports the grouping of entities (in classes), so program understanding is simplified. It also provides protection for entities declared within objects, so that access from outside the object is controlled (the entity may not be accessible, its name may be accessible but not its representation, or it may be fully accessible). This reduces the probability that changes to one part of the system will have undesirable effects on some other part.
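A minimal C++ sketch of this idea (the class and member names are assumptions): the representation is hidden behind a stable interface, so it can change without touching any client code.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Clients see only the interface; the representation is private.
    class NameStore {
    public:
        void add(const std::string& name) { names_.push_back(name); }
        std::size_t count() const { return names_.size(); }
    private:
        // Swapping this vector for a set, a list, or a database handle
        // would require no change in any client code.
        std::vector<std::string> names_;
    };

    int main() {
        NameStore store;
        store.add("Ada");                 // clients depend only on add()/count()
        return store.count() == 1 ? 0 : 1;
    }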

Object-oriented systems should be maintainable as the objects are independent. They may
be understood and modified as stand-alone entities. Changing the implementation of an
object or adding services should not affect other system objects. Because objects are
associated with things, there is often a clear mapping between real-world entities (such as
hardware components) and their controlling objects in the system. This improves the
understandability and hence the maintainability of the design.

Objects are potentially reusable components because they are independent encapsulations
of state and operations. Designs can be developed using objects that have been created in
previous designs. This reduces design, programming and validation costs. It may also lead
to the use of standard objects (hence improving design understandability) and reduces the
risks involved in software development.
