SOFTWARE DESIGN
Software Design is a process to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation.
When a software program is modularized, its tasks are divided into several modules based on
certain characteristics. Modules are sets of instructions put together in order to achieve some
task. Although each module is considered a single entity, modules may refer to each other to
work together. There are measures by which the quality of a modular design and the
interaction among modules can be assessed. These measures are called coupling and cohesion.
Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.
Coincidental cohesion - It is unplanned and random cohesion, which might be the result of
breaking the program into smaller modules purely for the sake of modularization. Because it is
unplanned, it may cause confusion for programmers and is generally not accepted.
Logical cohesion - When logically categorized elements are put together into a module, it is
called logical cohesion.
Temporal cohesion - When the elements of a module are organized such that they are processed
at a similar point in time, it is called temporal cohesion.
Procedural cohesion - When the elements of a module that are executed sequentially in order
to perform a task are grouped together, it is called procedural cohesion.
Communicational cohesion - When the elements of a module that are executed sequentially
and work on the same data (information) are grouped together, it is called communicational
cohesion.
Sequential cohesion - When elements of module are grouped because the output of one
element serves as input to another and so on, it is called sequential cohesion.
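The levels above can be contrasted in code. The following sketch is hypothetical (the function names and data are invented for illustration): the first function lumps two unrelated jobs together, while the second group forms a pipeline in which each element's output feeds the next.

```python
# Hypothetical modules illustrating low vs. high cohesion; all names are
# invented for this sketch.

# Coincidental cohesion: unrelated tasks lumped into one function.
def misc_utils(text, n):
    """Trims a string AND computes a factorial -- unrelated responsibilities."""
    trimmed = text.strip()
    factorial = 1
    for i in range(2, n + 1):
        factorial *= i
    return trimmed, factorial

# Sequential cohesion: the output of each element feeds the next.
def parse(raw):
    return [int(x) for x in raw.split(",")]

def keep_non_negative(values):
    return [v for v in values if v >= 0]

def total(values):
    return sum(values)

def process(raw):
    # parse -> filter -> total: a sequentially cohesive pipeline
    return total(keep_non_negative(parse(raw)))
```

Each step in the sequentially cohesive version can be changed or tested on its own, which is exactly why higher cohesion makes for a better design.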
Coupling
Coupling is a measure that defines the level of inter-dependability among the modules of a
program. It tells at what level the modules interface and interact with each other. The lower the
coupling, the better the program.
Content coupling - When a module can directly access, modify or refer to the content of
another module, it is called content-level coupling.
Common coupling - When multiple modules have read and write access to some global data,
it is called common or global coupling.
Control coupling - Two modules are called control-coupled if one of them decides the
function of the other module or changes its flow of execution.
Stamp coupling - When multiple modules share a common data structure and work on
different parts of it, it is called stamp coupling.
Data coupling - Data coupling is when two modules interact with each other by means of
passing data (as parameters). If a module passes a data structure as a parameter, the receiving
module should use all of its components.
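A small sketch can make the stamp/data distinction concrete. The `Employee` structure and the salary figures below are invented for illustration; they are not from any real system.

```python
# Hypothetical sketch contrasting stamp coupling with data coupling.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    monthly_salary: float
    address: str

# Stamp coupling: the whole structure is passed, although only one
# field is actually used by the receiver.
def annual_salary_stamp(emp: Employee) -> float:
    return emp.monthly_salary * 12

# Data coupling: only the elementary data item that is needed is passed.
def annual_salary_data(monthly_salary: float) -> float:
    return monthly_salary * 12
```

The data-coupled version depends on nothing but a number, so a change to the `Employee` structure cannot break it; this is why data coupling is considered the weakest (best) form of coupling.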
This design mechanism divides the whole system into smaller functions, which provides a means
of abstraction by concealing the information and its operation. These functional modules can
share information among themselves by passing information and by using globally available
information.
Another characteristic of functions is that when a program calls a function, the function may
change the state of the program, which sometimes is not acceptable to other modules. Function-
oriented design works well where the system state does not matter and programs/functions work
on input rather than on a state.
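The "work on input rather than on a state" idea can be sketched minimally. The functions below are illustrative, not from any real system: each depends only on its arguments and never touches shared state.

```python
# A minimal sketch of function-oriented design: each function depends only
# on its inputs, never on shared program state. Names are illustrative.

def to_fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

def classify(fahrenheit: float) -> str:
    return "hot" if fahrenheit >= 86 else "mild"

# Because neither function reads or writes global state, they can be
# composed and tested in isolation.
def report(celsius: float) -> str:
    return classify(to_fahrenheit(celsius))
```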
Design Process
The whole system is seen in terms of how data flows through it, by means of a data flow
diagram (DFD). The DFD depicts how functions change the data and the state of the entire
system. The entire system is logically broken down into smaller units, known as functions, on
the basis of their operation in the system.
Objects - All entities involved in the solution design are known as objects. For example,
person, banks, company and customers are treated as objects. Every entity has some
attributes associated to it and has some methods to perform on the attributes.
Encapsulation - In OOD, bundling the attributes (data variables) and methods (operations on
the data) together is called encapsulation. Encapsulation not only bundles the important
information of an object together, but also restricts access to the data and methods from the
outside world. This is called information hiding.
Inheritance - OOD allows similar classes to stack up in a hierarchical manner, where the lower
or sub-classes can import, implement and re-use allowed variables and methods from their
immediate super classes. This property of OOD is known as inheritance. This makes it easier
to define a specific class and to create generalized classes from specific ones.
A solution design is created from the requirements, a previously used system, and/or a system
sequence diagram.
Objects are identified and grouped into classes on the basis of similarity in their attribute
characteristics.
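Encapsulation and inheritance together can be sketched in a few lines. The `Account` classes below are invented for illustration; the leading underscore is Python's convention for data that should be hidden from the outside world.

```python
# Hypothetical sketch of encapsulation and inheritance; the Account
# classes are invented for illustration.

class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self._balance = balance      # underscore: hidden by convention

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount

    def balance(self) -> float:
        return self._balance         # controlled access to hidden data

class SavingsAccount(Account):       # inherits attributes and methods
    def add_interest(self, rate: float) -> None:
        self.deposit(self._balance * rate)
```

`SavingsAccount` re-uses `deposit` and `balance` from its super class and only adds what is specific to it, which is the inheritance property described above.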
Input Controls: buttons, text fields, checkboxes, radio buttons, dropdown lists,
list boxes, toggles, date field
Navigational Components: breadcrumb, slider, search field, pagination,
tags, icons
Informational Components: tooltips, icons, progress bar, notifications, message
boxes, modal windows
Containers: accordion
There are times when multiple elements might be appropriate for displaying
content. When this happens, it is important to consider the trade-offs. For example,
elements that save space can put more of a mental burden on the user by forcing
them to guess what is within a dropdown or what an element might do.
Best Practices for Designing an Interface
Everything stems from knowing your users, including understanding their goals,
skills, preferences, and tendencies. Once you know about your user, make sure to
consider the following when designing your interface:
Keep the interface simple. The best interfaces are almost invisible to the user.
They avoid unnecessary elements and are clear in the language they use on labels
and in messaging.
Create consistency and use common UI elements. By using common
elements in your UI, users feel more comfortable and are able to get things done
more quickly. It is also important to create patterns in language, layout and design
throughout the site to help facilitate efficiency. Once a user learns how to do
something, they should be able to transfer that skill to other parts of the site.
Be purposeful in page layout. Consider the spatial relationships between items
on the page and structure the page based on importance. Careful placement of
items can help draw attention to the most important pieces of information and can
aid scanning and readability.
Strategically use color and texture. You can direct attention toward or redirect
attention away from items using color, light, contrast, and texture to your
advantage.
Use typography to create hierarchy and clarity. Carefully consider how you use
typeface. Different sizes, fonts, and arrangements of the text help increase
scannability, legibility and readability.
Make sure that the system communicates what’s happening. Always inform
your users of location, actions, changes in state, or errors. The use of various UI
elements to communicate status and, if necessary, next steps can reduce frustration
for your user.
Think about the defaults. By carefully thinking about and anticipating the goals
people bring to your site, you can create defaults that reduce the burden on the
user. This becomes particularly important when it comes to form design where
you might have an opportunity to have some fields pre-chosen or filled out.
Software Reliability
Software reliability is also defined as the probability that a software system fulfills its assigned
task in a given environment for a predefined number of input cases, assuming that the hardware
and the inputs are free of error.
Reliability allocation expresses the required system reliability in terms of its subsystem
reliabilities:
f(R1, R2, ..., Rn) = Rs
where:
Ri is the reliability allocated to the ith subsystem/component.
f is the functional relationship between the subsystem/component and the system.
Rs is the required system reliability.
Several algorithms for reliability allocation have been developed [1]:
Equal apportionment
AGREE apportionment
ARINC apportionment
Feasibility of Objectives apportionment
Repairable Systems apportionment
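The simplest of these, equal apportionment, can be written out directly. For a series system of n subsystems, each subsystem receives the same target Ri = Rs^(1/n), so that the product of the Ri equals the required system reliability Rs. The figures below are illustrative.

```python
# Equal apportionment for a series system: each of n subsystems gets the
# same target, R_i = R_s ** (1 / n), so the product of the R_i equals the
# required system reliability R_s. The 0.95 target is illustrative.

def equal_apportionment(system_reliability: float, n: int) -> float:
    return system_reliability ** (1.0 / n)

# e.g. splitting a 0.95 system target across 4 subsystems
r_i = equal_apportionment(0.95, 4)
```

Note that each subsystem must be more reliable than the system target itself, since the targets multiply together.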
UNIT-IV
SOFTWARE TESTING
Testing is the process of evaluating a system or its component(s) with the intent to find whether
it satisfies the specified requirements or not.
Testing is executing a system in order to identify any gaps, errors, or missing requirements
relative to the actual requirements.
Software Process: A process defines a framework for a set of Key Process Areas (KPAs) that
must be established for the effective delivery of software engineering technology. It establishes
the context in which technical methods are applied, work products such as models, documents,
data, reports, forms, etc. are produced, milestones are established, quality is ensured, and
change is properly managed.
Functional testing is a quality assurance (QA) process and a type of black-box testing that
bases its test cases on the specifications of the software component under test. Functions are
tested by feeding them input and examining the output; internal program structure is rarely
considered (unlike in white-box testing). Functional testing usually describes what the system
does.
Consider, for example, an input box that accepts numbers between 1 and 1000. We can divide
its test cases into three equivalence classes of valid and invalid inputs.
Test cases for an input box accepting numbers between 1 and 1000 using Equivalence
Partitioning:
One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a
valid test case. If you select other values between 1 and 1000, the result is going to be the
same, so one test case for valid input data should be sufficient.
Input data class with all values below the lower limit, i.e. any value below 1, as an invalid
input data test case.
Input data with any value greater than 1000 to represent the third invalid input class.
So, using equivalence partitioning, you have categorized all possible test cases into three
classes. Test cases with other values from any class should give you the same result.
We have selected one representative from every input class to design our test cases. Test
case values are selected in such a way that the largest number of attributes of the
equivalence class can be exercised.
Test cases for an input box accepting numbers between 1 and 1000 using Boundary Value
Analysis:
Test cases with test data exactly on the boundaries of the input domain, i.e. values 1 and
1000 in our case.
Test data with values just below the boundaries, i.e. values 0 and 999.
Test data with values just above the boundaries, i.e. values 2 and 1001.
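The 1..1000 example above can be written out as a small test sketch. The `accept` validator is invented for illustration; the test values come from the equivalence classes and boundaries just described.

```python
# The 1..1000 input box as code: one representative per equivalence class
# plus the boundary values. The validator itself is invented for the sketch.

def accept(value: int) -> bool:
    return 1 <= value <= 1000

# Equivalence partitioning: one representative per class
ep_cases = {500: True,     # valid class
            0: False,      # below the lower limit
            1001: False}   # above the upper limit

# Boundary value analysis: the boundaries and their closest neighbours
bva_cases = {1: True, 1000: True, 0: False, 2: True, 999: True, 1001: False}

all_pass = all(accept(v) == expected
               for v, expected in {**ep_cases, **bva_cases}.items())
```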
Decision table testing is a black box test design technique used to determine test
scenarios for complex business logic.
Equivalence Partitioning and Boundary Value Analysis can only be applied to specific
conditions or inputs. However, if we have dissimilar inputs that result in different actions
being taken, or a business rule under which different combinations of inputs result in
different actions, we use a decision table to test these kinds of rules or logic.
Why is the decision table important?
Decision tables are very helpful in test design: they help testers explore the effects of
combinations of different inputs and other software states that must correctly implement
business rules. They also provide a regular way of stating complex business rules, which
is helpful for developers as well as for testers. Testing combinations can be a challenge,
as the number of combinations can often be huge, and testing every combination may be
unrealistic or infeasible. We have to be content with testing just a small subset of
combinations, but choosing which combinations to test and which to leave out is also
significant. If you do not have an efficient way of selecting combinations, an arbitrary
subset will be used, and this may well result in an ineffective test effort.
A decision table is an outstanding technique used in both testing and requirements
management. It is a structured exercise for preparing requirements when dealing with
complex business rules, and is also used to model complicated logic.
How to use decision tables in test design
First, identify a suitable function or subsystem that reacts according to a combination of
inputs or events. The chosen system should have few inputs, or the number of combinations
will become unmanageable. It is always better to take the maximum number of conditions,
split them into subsets and use these subsets one at a time. After identifying the features that
need to be combined, add them to a table showing all combinations of "Yes" and "No" for
each feature.
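Such a table maps combinations of conditions to actions, and each row becomes one test scenario. The login rule below is a hypothetical example invented for illustration.

```python
# Hypothetical decision table for a login rule, written as a mapping from
# condition combinations to actions; the rule itself is invented.
# Conditions: (valid_user, valid_password)

decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show user error",
    (False, False): "show user error",
}

def login_action(valid_user: bool, valid_password: bool) -> str:
    return decision_table[(valid_user, valid_password)]
```

With two binary conditions there are 2^2 = 4 rows, so four test scenarios cover every combination of the rule.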
The Cause-Effect Graphing technique is a black box testing technique that captures the
relationships between specific combinations of inputs (causes) and outputs (effects). It deals
with specific cases and avoids combinatorial explosion. The graph includes a number of
intermediate nodes linking causes and effects; each cause and effect is represented as a node
of the cause-effect graph.
Structural testing, also known as glass box testing or white box testing, is an approach where
the tests are derived from knowledge of the software's structure or internal implementation.
Other names for structural testing include clear box testing, open box testing, logic-driven
testing and path-driven testing.
Branch Coverage - This technique runs a series of tests to ensure that all branches are
tested at least once.
Path Coverage - This technique corresponds to testing all possible paths, which means that
each statement and branch is covered.
Branch Coverage = (Number of decision outcomes tested / Total number of decision outcomes) x 100 %
Path Coverage = (Number of paths exercised / Total number of paths in the program) x 100 %
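The two formulas can be written out directly; the counts below are illustrative.

```python
# The two coverage formulas above, written out; the counts are illustrative.

def branch_coverage(tested_outcomes: int, total_outcomes: int) -> float:
    return tested_outcomes / total_outcomes * 100

def path_coverage(exercised_paths: int, total_paths: int) -> float:
    return exercised_paths / total_paths * 100

# e.g. 6 of 8 decision outcomes tested gives 75 % branch coverage
```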
Spots dead code and other issues with respect to best programming practices.
It assumes that no defects exist in the system other than those that affect control flow.
Decision-to-Decision path (D-D) - The CFG can be broken into various decision-to-
decision paths, which are then collapsed into individual nodes.
Independent (basis) paths - An independent path is a path through a DD-path graph that
cannot be reproduced from other paths by other methods.
These ambiguities might cause failures in the software if not fixed, and can easily pass through
the testing phase undetected.
It is the most powerful method for detecting hidden defects, which might be impossible to
identify using conventional testing techniques.
Tools such as Insure++ help us to find defects in the code using state-of-the-art techniques.
Debugging and maintaining the product becomes much easier.
Statement Mutations: Changes made to statements by deleting or duplicating a line, which
might arise when a developer copy-pastes code from somewhere else.
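A statement mutation can be sketched concretely. The functions below are invented for illustration: the mutant duplicates one line, and any test input whose result differs between the original and the mutant "kills" the mutant, showing the test suite would catch such a defect.

```python
# Hypothetical mutation-testing sketch: a statement mutant (a duplicated
# line) and a check that a test distinguishes it. Function names invented.

def add_to_total(total: int, amount: int) -> int:
    total += amount
    return total

def add_to_total_mutant(total: int, amount: int) -> int:
    total += amount
    total += amount    # statement mutation: the line was duplicated
    return total

# A test input whose result differs between original and mutant
# "kills" the mutant.
mutant_killed = add_to_total(10, 5) != add_to_total_mutant(10, 5)
```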
The main aim is to isolate each unit of the system to identify, analyze and fix the defects.
Unit tests, when integrated with the build, also indicate the quality of the build.
White Box Testing - Used to test the behaviour of each function.
Gray Box Testing - Used to execute tests, risks and assessment methods.
Integration Strategies:
Big-Bang Integration
Top Down Integration
Bottom Up Integration
Hybrid Integration
SIT can be performed with minimal usage of testing tools: the interactions exchanged are
verified, and the behaviour of each data field within an individual layer is investigated. After
the integration, testing approaches include:
Inter-system testing
Pairwise testing
What is Debugging?
It is a systematic process of spotting and fixing bugs, or defects, in a piece of software so that
the software behaves as expected. Debugging is harder for complex systems, in particular
when various subsystems are tightly coupled, as a change in one system or interface may
cause bugs to emerge in another.
Debugging is a developer activity, and effective debugging is very important before testing
begins to increase the quality of the system. Debugging does not give confidence that the
system meets its requirements completely; testing does.
In-depth knowledge of the programming language is necessary to perform white box
testing.
Testing Tools:
Tools, in a software testing context, can be defined as products that support one or more test
activities, right from planning and requirements through creating a build, test execution,
defect logging and test analysis.
Classification of Tools
Tools can be classified based on several parameters. They include:
Types of Tools:
S.No. Tool Type Used for Used by
4. Test data Preparation Tools Analysis and Design, Test data generation Testers
The constraints, such as budget, time and other requirements, are noted.
Evaluating the options and shortlisting the ones that meet the requirements.
Developing a proof of concept that captures the pros and cons.
Creating a pilot project using the selected tool within a specified team.
Many organizations around the globe develop and implement different standards to improve the
quality needs of their software. This chapter briefly describes some of the widely used standards
related to Quality Assurance and Testing.
ISO/IEC 9126
This standard deals with the following aspects to determine the quality of a software
application:
Quality model
External metrics
Internal metrics
This standard presents some set of quality attributes for any software such as:
Functionality
Reliability
Usability
Efficiency
Maintainability
Portability
The above-mentioned quality attributes are further divided into sub-factors, which you can
explore when studying the standard in detail.
ISO/IEC 9241-11
Part 11 of this standard deals with the extent to which a product can be used by specified users
to achieve specified goals with Effectiveness, Efficiency and Satisfaction in a specified context
of use.
This standard proposed a framework that describes the usability components and the
relationship between them. In this standard, the usability is considered in terms of user
performance and satisfaction. According to ISO 9241-11, usability depends on context of use
and the level of usability will change as the context changes.
ISO/IEC 25000:2005
ISO/IEC 25000:2005 is commonly known as the standard that provides the guidelines for
Software Quality Requirements and Evaluation (SQuaRE). This standard helps in organizing
and enhancing the process related to software quality requirements and their evaluations. In
reality, ISO-25000 replaces the two old ISO standards, i.e. ISO-9126 and ISO-14598.
Reference Models
General guide
ISO/IEC 12119
This standard deals with software packages delivered to the client. It does not focus on or deal
with the client's production process. The main contents are related to the following items:
Instructions for testing a delivered software package against the specified requirements.
Miscellaneous
Some of the other standards related to QA and Testing processes are mentioned below:
Standard Description
IEEE 829 A standard for the format of documents used in different stages of software testing.
IEEE 12207 A standard for software life cycle processes and life cycle data.
Software Maintenance
Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the
modifications and updates made after the delivery of a software product. There are a number
of reasons why modifications are required; some of them are briefly mentioned below:
Market Conditions - Policies that change over time, such as taxation, and newly
introduced constraints, such as how to maintain bookkeeping, may trigger a need for
modification.
Client Requirements - Over time, customers may ask for new features or functions in the
software.
Host Modifications - If any of the hardware and/or platform (such as the operating system) of
the target host changes, software changes are needed to maintain adaptability.
Organization Changes - If there is any business-level change at the client's end, such as a
reduction in organization strength, acquiring another company, or the organization venturing
into new business, a need to modify the original software may arise.
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a
routine maintenance task, as when some bug is discovered by a user, or it may be a large event
in itself, based on maintenance size or nature. Following are some types of maintenance based
on their characteristics:
Adaptive Maintenance - This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and the
business environment.
Perfective Maintenance - This includes modifications and updates done in order to keep
the software usable over a long period of time. It includes new features, new user
requirements for refining the software, and improving its reliability and performance.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software
maintenance found that the cost of maintenance is as high as 67% of the cost of entire software
process cycle.
On average, the cost of software maintenance is more than 50% of all SDLC phases. There
are various factors which drive maintenance costs up, such as:
Older software, which was meant to work on slow machines with less memory and storage
capacity, cannot keep up against newly arriving enhanced software on modern hardware.
Most maintenance engineers are newcomers and use trial-and-error methods to rectify
problems.
Often, the changes made can easily hurt the original structure of the software, making it hard
to apply any subsequent changes.
Changes are often left undocumented, which may cause more conflicts in the future.
Software-end factors affecting Maintenance Cost
Programming Language
Dependence on external environment
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities. It can be used in
iterative manner and can be extended so that customized items and processes can be included.
Analysis - The modification is analyzed for its impact on the system, including safety and
security implications. If the probable impact is severe, an alternative solution is looked
for. A set of required modifications is then materialized into requirement specifications.
The cost of modification/maintenance is analyzed and an estimate is concluded.
Design - New modules, which need to be replaced or modified, are designed against
requirement specifications set in the previous stage. Test cases are created for validation
and verification.
Implementation - The new modules are coded with the help of the structured design
created in the design step. Every programmer is expected to do unit testing in parallel.
System Testing - Integration testing is done among the newly created modules. Integration
testing is also carried out between the new modules and the system. Finally, the system is
tested as a whole, following regression testing procedures.
Acceptance Testing - After testing the system internally, it is tested for acceptance with
the help of users. If at this stage users complain of some issues, they are addressed or
noted to be addressed in the next iteration.
Delivery - After acceptance test, the system is deployed all over the organization either
by small update package or fresh installation of the system. The final testing takes place
at client end after the software is delivered.
A training facility is provided if required, in addition to a hard copy of the user manual.
Software Re-engineering
When we need to update the software to keep it current with the market, without impacting its
functionality, it is called software re-engineering. It is a thorough process in which the design
of the software is changed and programs are rewritten.
Legacy software cannot keep up with the latest technology available in the market. As the
hardware becomes obsolete, updating the software becomes a headache. Even if software
grows old with time, its functionality does not.
For example, Unix was initially developed in assembly language. When the C language came
into existence, Unix was re-engineered in C, because working in assembly language was
difficult. Other than this, programmers sometimes notice that a few parts of the software need
more maintenance than others, and these also need re-engineering.
Re-Engineering Process
Reverse Engineering
It is a process of obtaining the system specification by thoroughly analyzing and understanding
the existing system. This process can be seen as a reverse SDLC model, i.e. we try to get a
higher abstraction level by analyzing lower abstraction levels.
An existing system is a previously implemented design about which we know nothing.
Designers then do reverse engineering by looking at the code and trying to recover the design.
With the design in hand, they try to conclude the specifications, thus going in reverse from
code to system specification.
Program Restructuring
It is a process of re-structuring and re-constructing the existing software. It is all about re-
arranging the source code, either within the same programming language or from one
programming language to a different one. Restructuring can involve source code restructuring,
data restructuring, or both.
Re-structuring does not impact the functionality of the software, but it enhances reliability and
maintainability. Program components which cause errors very frequently can be changed or
updated with re-structuring.
The dependency of the software on an obsolete hardware platform can be removed via
re-structuring.
Forward Engineering
Forward engineering is a process of obtaining desired software from the specifications in
hand, which were derived by means of reverse engineering. It assumes that some software
engineering has already been done in the past.
Forward engineering is the same as the software engineering process, with only one
difference: it is always carried out after reverse engineering.
Component reusability
A component is a part of software program code, which executes an independent task in the
system. It can be a small module or sub-system itself.
Example
The login procedures used on the web can be considered components; the printing system in
software can be seen as a component of the software.
Components have high cohesion of functionality and a lower rate of coupling, i.e. they work
independently and can perform tasks without depending on other modules.
In OOP, objects are designed to be very specific to their concern and have fewer chances of
being used in some other software.
In modular programming, modules are coded to perform specific tasks and can be used across
a number of other software programs.
There is a whole new vertical based on the re-use of software components, known as
Component Based Software Engineering (CBSE).
Design - This is also a standard SDLC process step, where requirements are defined in terms
of software parlance. The basic architecture of the system as a whole and of its sub-systems
is created.
Specify Components - By studying the software design, the designers segregate the entire
system into smaller components or sub-systems. One complete software design turns into a
collection of a huge set of components working together.
The quick-fix model is an ad hoc approach used for maintaining the software system. The
objective of this model is to identify the problem and then fix it as quickly as possible.
The advantage is that it performs its work quickly and at a low cost. This model is an
approach to modify the software code with little consideration for its impact on the
overall structure of the software system.
Sometimes, users do not want to wait for a long time; rather, they require the modified software
system to be delivered to them in the least possible time. As a result, the software maintenance
team needs to use the quick-fix model to avoid the time-consuming process of the SMLC.
This model is beneficial when a single user is using the software system. As the user has proper
knowledge of the software system, it becomes easier to maintain it without needing to manage
detailed documentation. This model is also advantageous in situations where the software
system is to be maintained under certain deadlines and with limited resources.
However, this model is not suitable to fix errors for a longer period.
The iterative enhancement model, which was originally proposed as a process model, can be
easily adapted for maintaining a software system. It considers that the changes made to the
software system are iterative in nature. The iterative enhancement model comprises three stages,
namely, analysis of software system, classification of requested modifications, and
implementation of requested modifications.
The reuse-oriented model assumes that the existing program components can be reused to
perform maintenance.
SCM stands for software configuration management, also known as software control
management. SCM aims to control changes introduced to large complex software systems
through reliable version selection and version control.
Software documentation is written text that accompanies computer software. It explains either
how the software operates or how to use it, and may mean different things to people in
different roles.