•The main aim of design engineering is to generate a model that exhibits firmness, commodity and delight.
•Software design is an iterative process through which requirements are translated into the blueprint
for building the software.
Software quality guidelines
•A design should use recognizable architectural styles, be composed of components that exhibit good design characteristics, and be implemented in an evolutionary manner to facilitate testing.
•The design of the software must be modular, i.e., the software must be logically partitioned into elements.
•In design, the representations of data, architecture, interface and components should be distinct.
•A design must use appropriate data structures and recognizable data patterns.
•Design components must exhibit independent functional characteristics.
•A design should create interfaces that reduce the complexity of connections between components.
•A design must be derived using a repeatable method.
•Notations that effectively communicate meaning should be used in the design.
Quality attributes
Functionality:
It evaluates the feature set and capabilities of the program.
Usability:
It is assessed by considering factors such as human factors, overall aesthetics, consistency and documentation.
Reliability:
It is evaluated by measuring parameters such as the frequency and severity of failure, the accuracy of output results, the mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the program.
Performance:
It is measured by considering processing speed, response time, resource consumption, throughput and
efficiency.
Supportability:
•It combines extensibility, adaptability and serviceability. These three terms define maintainability.
•Testability, compatibility and configurability are the attributes that allow a system to be installed easily and its problems to be found easily.
•Supportability also consists of more attributes such as compatibility, extensibility, fault tolerance,
modularity, reusability, robustness, security, portability, scalability.
Design concepts
1. Abstraction
•A solution is stated in broad terms using the language of the problem environment at the highest level of abstraction.
•The lower level of abstraction provides a more detail description of the solution.
•A procedural abstraction refers to a sequence of instructions that has a specific and limited function.
•A collection of data that describes a data object is a data abstraction.
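As an illustrative sketch (the door example and all names here are hypothetical, not from the notes above), the two kinds of abstraction can be shown in Python:

```python
# Data abstraction: a collection of data that describes a data object.
def make_door(locked=False):
    return {"locked": locked, "open": False}

# Procedural abstraction: "open_door" names a specific, limited function;
# the internal steps it performs are hidden behind the name.
def open_door(door):
    if door["locked"]:
        raise ValueError("door is locked")
    door["open"] = True

door = make_door()
open_door(door)
print(door["open"])  # True
```

A caller only needs the abstraction's name and purpose, not the detail behind it.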
2. Architecture
•The complete structure of the software is known as software architecture.
•Structure provides conceptual integrity for a system in a number of ways.
•The architecture is the structure of program modules and the way these modules interact with each other.
•It also includes the structure of the data used by the components.
•The aim of the software design is to obtain an architectural framework of a system.
•The more detailed design activities are conducted from the framework.
3. Patterns
A design pattern describes a design structure that solves a particular design problem within a specified context.
4. Modularity
•Software is divided into separately named and addressable components, sometimes called modules, which are integrated to satisfy the problem requirements.
•Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding
Modules should be specified and designed so that information (such as algorithms and data) contained within a module is inaccessible to other modules that have no need for that information.
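A minimal Python sketch of information hiding (the Counter class is a hypothetical example): other modules use only the public methods, while the internal representation of the count stays hidden behind them.

```python
class Counter:
    """Other modules see only increment() and value();
    the internal count representation is a hidden detail."""

    def __init__(self):
        self._count = 0  # leading underscore: internal, by Python convention

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value())  # 2
```

If the internal representation later changes (say, to a log of events), no client module needs to change.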
6. Functional independence
•Functional independence is the concept of separation; it is related to the concepts of modularity, abstraction and information hiding.
•Functional independence is assessed using two criteria, i.e., cohesion and coupling.
Cohesion
•Cohesion is an extension of the information hiding concept.
•A cohesive module performs a single task and requires little interaction with components in other parts of the program.
Coupling
Coupling is an indication of the interconnection among modules in a software structure.
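The two criteria can be illustrated with a hypothetical sketch: each function below performs a single task (high cohesion), and the functions interact only through simple parameters rather than shared state (low coupling).

```python
# High cohesion: each function does exactly one thing.
def total(prices):
    return sum(prices)

def apply_tax(amount, rate):
    return amount * (1 + rate)

# Low coupling: modules communicate through parameters and return
# values only, not through globals or each other's internals.
def invoice_total(prices, tax_rate):
    return apply_tax(total(prices), tax_rate)

print(round(invoice_total([10.0, 20.0], 0.1), 2))  # 33.0
```

Either function could be replaced or tested in isolation without touching the others.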
7. Refinement
•Refinement is a top-down design approach.
•It is a process of elaboration.
•A program is developed by successively refining levels of procedural detail.
•A hierarchy is established by decomposing a statement of function in a stepwise manner until programming language statements are reached.
8. Refactoring
•It is a reorganization technique that simplifies the design of a component without changing its functional behaviour.
•Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves its internal structure.
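A hypothetical before-and-after sketch of refactoring: the external behaviour is identical, but the internal structure is simplified.

```python
# Before refactoring: terse names, manual loop.
def f(xs):
    r = []
    for x in xs:
        if x % 2 == 0:
            r.append(x * x)
    return r

# After refactoring: same external behaviour, clearer internal structure.
def squares_of_evens(numbers):
    return [n * n for n in numbers if n % 2 == 0]

# Both versions produce the same output for the same input.
print(f([1, 2, 3, 4]) == squares_of_evens([1, 2, 3, 4]))  # True
```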
9. Design classes
•The model of software is defined as a set of design classes.
•Every class describes the elements of the problem domain, focusing on the features of the problem that are visible to the user.
Architectural Design:
Data-centered architectures:
•A data store will reside at the center of this architecture and is accessed frequently by the other
components that update, add, delete or modify the data present within the store.
•The figure illustrates a typical data-centered style. Client software accesses a central repository. In a variation of this approach, the repository is transformed into a blackboard that sends notifications to client software whenever data of interest to a client changes.
•This data-centered architecture promotes integrability: existing components can be changed and new client components can be added to the architecture without concern for other clients.
•Data can be passed among clients using blackboard mechanism.
Data flow architectures:
•This kind of architecture is used when input data is to be transformed into output data through a series of computational or manipulative components.
•The figure represents a pipe-and-filter architecture, which has a set of components called filters connected by pipes.
•Pipes are used to transmit data from one component to the next.
•Each filter works independently and is designed to accept data input of a certain form and produce data output of a specified form to the next filter. A filter requires no knowledge of the workings of its neighboring filters.
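As a rough sketch of the pipe-and-filter style (the filters here are hypothetical), Python generators can play the role of pipes, with each filter consuming input of one form and producing output for the next, knowing nothing about its neighbours:

```python
# Filter 1: split raw text into lines.
def read_lines(text):
    for line in text.splitlines():
        yield line

# Filter 2: discard blank lines.
def strip_blank(lines):
    for line in lines:
        if line.strip():
            yield line

# Filter 3: normalize to upper case.
def upper(lines):
    for line in lines:
        yield line.upper()

# The pipes: each filter's output feeds the next filter's input.
pipeline = upper(strip_blank(read_lines("one\n\ntwo")))
print(list(pipeline))  # ['ONE', 'TWO']
```

Filters can be reordered, removed, or replaced without changing the others, which is the main appeal of the style.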
Call and Return architectures: It is used to create a program that is easy to scale and modify.
Many sub-styles exist within this category. Two of them are explained below.
•Remote procedure call architecture: the components of a main program/subprogram architecture are distributed across multiple computers on a network.
•Main program or subprogram architectures: the main program structure decomposes into a number of subprograms or functions arranged in a control hierarchy. The main program invokes a number of subprograms, which can in turn invoke other components.
Object Oriented architecture: The components of a system encapsulate data and the operations
that must be applied to manipulate the data. The coordination and communication between the components are established via message passing.
Layered architecture:
•A number of different layers are defined, each performing a well-defined set of operations. Each successive layer performs operations that become progressively closer to the machine instruction set.
•At the outer layer, components service user interface operations; at the inner layers, components perform operating system interfacing (communication and coordination with the OS).
•Intermediate layers provide utility services and application software functions.
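A minimal sketch of a layered structure (the banking-style names are hypothetical): the outer layer handles a user interface operation, an intermediate layer provides the service, and the inner layer stands in for the part closest to the machine.

```python
# Outer layer: user interface operations.
def ui_show_balance(user_id):
    return f"Balance: {service_get_balance(user_id)}"

# Intermediate layer: utility/application services.
def service_get_balance(user_id):
    return data_read(user_id)

# Inner layer: closest to the machine (here, a stand-in data store).
_store = {"u1": 100}

def data_read(user_id):
    return _store[user_id]

print(ui_show_balance("u1"))  # Balance: 100
```

Each layer calls only the layer directly beneath it, so a layer can be swapped (e.g., the store for a real database) without touching the outer layers.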
Low level design: Low-level design (LLD) is a component-level design process that follows a step-by-step refinement approach. This process can be used for designing data structures, the required software architecture, source code and, ultimately, performance algorithms. Overall, the data organization may be defined during requirement analysis and then refined during data design work. After this, each component is specified in detail. The LLD phase is the stage where the actual software components are designed. During the detailed (low-level) design phase the logical and functional design is done, while the design of the application structure is developed during the high-level design phase.
Design Phase
A design is the organization of a system that connects its individual components. Often, such a system can interact with other systems. Design is important to achieve high reliability, low cost, and good maintainability. We
can distinguish two types of program design phases:
•Architectural or high-level design
•Detailed or low-level design
Purpose:
The goal of LLD or a low-level design document (LLDD) is to give the internal logical design of the
actual program code. Low-level design is created based on the high-level design. LLD describes the
class diagrams with the methods and relations between classes and program specs. It describes the
modules so that the programmer can directly code the program from the document.
A good low-level design document makes the program easy to develop when proper analysis is used to create it. The code can then be developed directly from the document with minimal debugging and testing. Other advantages include lower cost and easier maintenance.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent
modules, which are expected to be capable of carrying out task(s) independently. These modules may
work as basic constructs for the entire software. Designers tend to design modules such that they can
be executed and/or compiled separately and independently.
Modular design naturally follows the 'divide and conquer' problem-solving strategy, and it brings many other benefits with it as well.
Advantages of modularization:
•Smaller components are easier to maintain
•Program can be divided based on functional aspects
•Desired level of abstraction can be brought in the program
•Components with high cohesion can be re-used again
•Concurrent execution can be made possible
•Desired from the security aspect
Design structure charts:
A Structure Chart (SC) in software engineering and organizational theory is a chart which shows
the breakdown of a system to its lowest manageable levels. They are used in structured
programming to arrange program modules into a tree. Each module is represented by a box, which
contains the module's name. The tree structure visualizes the relationships between modules. A
structure chart is a top-down modular design tool, constructed of squares representing the different
modules in the system, and lines that connect them. The lines represent the connection and or
ownership between activities and subactivities as they are used in organization charts.
A structure chart depicts:
•the size and complexity of the system,
•the number of readily identifiable functions and modules within each function, and
•whether each identifiable function is a manageable entity or should be broken down into smaller components.
A structure chart is also used to diagram associated elements that comprise a run stream or thread. It
is often developed as a hierarchical diagram, but other representations are allowable. The
representation must describe the breakdown of the configuration system into subsystems and the
lowest manageable level. An accurate and complete structure chart is the key to determining the configuration items (CIs), and it provides a visual representation of the configuration system and the internal interfaces among its CIs. During the configuration control process, the structure chart is used to identify CIs and their associated artifacts that a proposed change may impact.
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be
defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task.
The flowchart shows the steps as boxes of various kinds, and their order by connecting the boxes
with arrows. This diagrammatic representation illustrates a solution model to a given problem.
Flowcharts are used in analyzing, designing, documenting or managing a process or program in
various fields.
Types
•Document flowcharts, showing controls over a document-flow through a system
•Data flowcharts, showing controls over a data-flow in a system, modeled from the perspective of different users
•System flowcharts, showing controls at a physical or resource level
•Program flowcharts, showing the controls in a program within a system.
When solutions for the software need to be developed from the ground up, the top-down design approach best suits the purpose.
Advantages:
•The main advantage of the top-down approach is that its strong focus on requirements helps to make the design responsive to those requirements.
Disadvantages:
•Project and system boundaries tend to be application-specific. Thus it is more likely that the advantages of component reuse will be missed.
•The system is likely to miss the benefits of a well-structured, simple architecture.
Bottom-up approach:
The design starts with the lowest level components and subsystems. By using these components, the
next immediate higher level components and subsystems are created or composed. The process is
continued till all the components and subsystems are composed into a single component, which is
considered as the complete system. The amount of abstraction grows as the design moves to higher levels. When a new system needs to be created using the basic information of an existing system, the bottom-up strategy suits the purpose.
Advantages:
•Economies can result when general solutions are reused.
•It can be used to hide the low-level details of implementation and be merged with top-down
technique.
Disadvantages:
•It is not so closely related to the structure of the problem.
•High quality bottom-up solutions are very hard to construct.
•It leads to the proliferation of 'potentially useful' functions rather than the most appropriate ones.
Function oriented design:
Function Oriented Design Strategies are as follows:
1.Data Flow Diagram (DFD):
A data flow diagram (DFD) maps out the flow of information for any process or system. It uses
defined symbols like rectangles, circles and arrows, plus short text labels, to show data inputs,
outputs, storage points and the routes between each destination.
2.Data Dictionaries:
Data dictionaries are simply repositories to store information about all data items defined in
DFDs. At the requirement stage, data dictionaries contain data items. Data dictionary entries include the name of the item, aliases (other names for the item), description/purpose, related data items, range of values, and data structure definition/form.
3.Structure Charts:
It is a hierarchical representation of the system that partitions the system into black boxes (whose functionality is known to users but whose inner details are unknown). Components are read from top to bottom and left to right. When a module calls another, it views the called module as a black box, passing the required parameters and receiving results.
Unit 4:
Top Down and Bottom up programming:
Top-down and bottom-up are both strategies of information processing and knowledge ordering, used in a variety of fields including software, humanistic and scientific theories, and management and organization. In practice, they can be seen as a style of thinking, teaching, or leadership.
A top-down approach (also known as stepwise design and in some cases used as a synonym
of decomposition) is essentially the breaking down of a system to gain insight into its compositional
sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is
formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined
in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black boxes",
which makes it easier to manipulate.
A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus
making the original systems sub-systems of the emergent system. Bottom-up processing is a type
of information processing based on incoming data from the environment to form a perception. From
a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the
"bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a
perception (output that is "built up" from processing to final cognition). In a bottom-up approach the
individual base elements of the system are first specified in great detail. These elements are then
linked together to form larger subsystems, which then in turn are linked, sometimes in many levels,
until a complete top-level system is formed.
Structured programming:
Structured programming is a programming paradigm aimed at improving the clarity, quality, and
development time of a computer program by making extensive use of the structured control flow
constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines.
It emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming
languages, with the latter including support for block structures. Contributing factors to its popularity
and widespread acceptance, at first in academia and later among practitioners, include the discovery
of what is now known as the structured program theorem in 1966, and the publication of the influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer scientist Edsger W. Dijkstra, who coined the term "structured programming".
Structured programming is most frequently used with deviations that allow for clearer programs in
some particular cases, such as when exception handling has to be performed.
Elements
Control structures
Following the structured program theorem, all programs are seen as composed of control structures:
•"Sequence"; ordered statements or subroutines executed in sequence.
•"Selection"; one or a number of statements is executed depending on the state of the program. This
is usually expressed with keywords such as if..then..else..endif.
•"Iteration"; a statement or block is executed until the program reaches a certain state, or operations
have been applied to every element of a collection. This is usually expressed with keywords such
as while, repeat, for or do..until.
•"Recursion"; a statement is executed by repeatedly calling itself until termination conditions are met.
While similar in practice to iterative loops, recursive loops may be more computationally efficient,
and are implemented differently as a cascading stack.
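The four control structures above can be sketched in Python as follows (the computations themselves are arbitrary illustrations):

```python
# Sequence: ordered statements executed one after another.
total = 0
total = total + 1

# Selection: a statement executed depending on the state of the program.
parity = "even" if total % 2 == 0 else "odd"

# Iteration: a block executed for every element of a collection.
squares = []
for n in [1, 2, 3]:
    squares.append(n * n)

# Recursion: a function repeatedly calling itself until a
# termination condition is met.
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

print(parity, squares, factorial(4))  # odd [1, 4, 9] 24
```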
Structured programming languages: It is possible to do structured programming in any
programming language, though it is preferable to use something like a procedural programming
language. Some of the languages initially used for structured programming
include: ALGOL, Pascal, PL/I and Ada, but most new procedural programming languages since that time have included features to encourage structured programming, and have sometimes deliberately left out features (notably GOTO) in an effort to make unstructured programming more difficult. Structured programming (sometimes known as modular programming) enforces a logical structure on the program being written to make it more efficient and easier to understand and modify.
What is code Inspection?
Code inspection is the most formal type of review, a kind of static testing used to avoid defect multiplication at a later stage.
The main purpose of code inspection is to find defects and it can also spot any process improvement.
An inspection report lists the findings, which include metrics that can be used to aid improvements to
the process as well as correcting defects in the document under review.
Preparation before the meeting is essential, which includes reading of any source documents to
ensure consistency.
Inspections are often led by a trained moderator, who is not the author of the code.
The inspection process is the most formal type of review based on rules and checklists and makes use
of entry and exit criteria.
It usually involves peer examination of the code, and each participant has a defined role.
After the meeting, a formal follow-up process is used to ensure that corrective action is completed in
a timely manner.
Although direct discovery of quality problems is often the main goal, code reviews are usually performed to reach a combination of goals:
•Better code quality – improve internal code quality and maintainability (readability, uniformity,
understandability, ...)
•Finding defects – improve quality regarding external aspects, especially correctness, but also find
performance problems, security vulnerabilities, injected malware, ...
•Learning/Knowledge transfer – help in transferring knowledge about the codebase, solution
approaches, expectations regarding quality, etc; both to the reviewers as well as to the author
•Increase sense of mutual responsibility – increase a sense of collective code ownership and
solidarity
•Finding better solutions – generate ideas for new and better solutions and ideas that transcend the
specific code at hand.
•Complying to QA guidelines – Code reviews are mandatory in some contexts, e.g., air traffic
software
Compliance with Design and Coding Standards: every development team should use a coding standard. Even the most experienced developer can introduce a coding defect without realizing it, and that one defect could lead to a minor glitch or, worse, a serious security breach.
Testing objectives:
•Software testing is a crucial element of the software development life cycle (SDLC), which can help software engineers save organizations time and money by finding errors and defects during the early stages of software development. With the assistance of this process, one can examine the various components associated with the application and guarantee their appropriateness.
The goals and objectives of software testing are numerous; when achieved, they help developers build defect-free and error-free software and applications that have exceptional performance, quality, effectiveness and security, among other things. Though the objectives of testing can vary from company to company and project to project, some goals are common to all. These objectives are:
1.Verification: A prominent objective of testing is verification, which allows testers to
confirm that the software meets the various business and technical requirements stated by the
client before the inception of the whole project. These requirements and specifications guide
the design and development of the software, hence are required to be followed rigorously.
Moreover, compliance with these requirements and specifications is important for the success
of the project as well as to satisfy the client.
2.Validation: Confirms that the software performs as expected and as per the requirements of
the clients. Validation involves comparing the final output with the expected output and then making necessary changes if there is a difference between the two.
3.Defects: The most important purpose of testing is to find different defects in the software to prevent its failure or crash during implementation or the go-live of the project. Defects, if left
undetected or unattended can harm the functioning of the software and can lead to loss of
resources, money, and reputation of the client. Therefore, software testing is executed
regularly during each stage of software development to find defects of various kinds. The
ultimate source of these defects can be traced back to a fault introduced during the
specification, design, development, or programming phases of the software.
4.Providing Information: With the assistance of reports generated during the process of
software testing, testers can accumulate a variety of information related to the software and
the steps taken to prevent its failure. These can then be shared with all the stakeholders of the project for a better understanding of the project as well as to establish transparency between members.
5.Preventing Defects: During the process of testing, the aim of testers is to identify defects and prevent them from occurring again in the future. To accomplish this goal, software is tested rigorously by independent testers, who are not responsible for the software's development.
6.Quality Analysis: Testing helps improve the quality of the software by constantly
measuring and verifying its design and coding. Additionally, various types of testing
techniques are used by testers, which help them achieve the desired software quality.
7.For Optimum User Experience: Easy software and application accessibility and an optimum user experience are two important requirements that need to be accomplished for the success of any project as well as to increase the revenue of the client. Therefore, to ensure this, the software is tested again and again by the testers with the assistance of stress testing, load testing, spike testing, etc.
•Enables the team to build bug-free software and applications with exceptional quality and features.
•Saves time and money by testing the software during the early stages of development.
•Offers a better user experience and customer service by building effective and efficient software.
•Monitors and assesses various activities during the process of software development.
•Helps identify various human errors, which if executed can lead to system failure.
Unit testing:
It focuses on the smallest unit of software design. Here we test an individual unit or a group of interrelated units. It is often done by the programmer, using sample inputs and observing the corresponding outputs.
Example: an error caught at this level might be the incorrect initialization of a variable.
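A sketch of what a unit test might look like using Python's unittest module (the divide function is a hypothetical unit under test, not from the notes):

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of the design.
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_normal_case(self):
        # Sample input with its expected corresponding output.
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor(self):
        # The error path is also exercised at the unit level.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

# Run the tests programmatically and report the overall result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDivide)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```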
Integration Testing
The objective is to take unit-tested components and build a program structure that has been dictated by design. Integration testing is testing in which a group of components is combined to produce output.
Integration testing is of four types: (i) Top down (ii) Bottom up (iii) Sandwich (iv) Big-Bang
Acceptance Testing
An acceptance test is performed by the client to verify whether the end-to-end flow of the system is as per the business requirements and the needs of the end-user. The client accepts the software only when all the features and functionalities work as expected.
It is the last phase of the testing, after which the software goes into production. This is also called
User Acceptance Testing (UAT).
Regression testing:
Testing an application as a whole after the modification of any module or functionality is termed regression testing. It is difficult to cover the whole system in regression testing, so typically automation testing tools are used for these types of tests. Every time a new module is added, the program changes. This type of testing makes sure that the whole system works properly even after components are added to the complete program.
Example: in a school record system we have staff and student modules; when a new module is added, the existing staff and student modules must be re-tested.
Structural testing: Structural testing is the type of testing carried out to test the structure of the code. It is also known as white-box testing or glass-box testing. This type of testing requires knowledge of the code, so it is mostly done by developers. It is more concerned with how the system does something than with the functionality of the system. It provides more coverage to the testing. For example, to test a certain error message in an application, we need to test its trigger condition, but there may be many triggers, and it is possible to miss one while testing the requirements drafted in the SRS. Using structural testing, the trigger is most likely to be covered, since structural testing aims to cover all the nodes and paths in the structure of the code.
FUNCTIONAL TESTING is a type of software testing whereby the system is tested against the
functional requirements/specifications. Functions (or features) are tested by feeding them input and
examining the output. Functional testing ensures that the requirements are properly satisfied by the
application. This type of testing is not concerned with how processing occurs, but rather, with the
results of processing. It simulates actual system usage but does not make any system structure
assumptions. During functional testing, the black-box testing technique is used, in which the internal logic of the system being tested is not known to the tester. Functional testing is normally performed at the System Testing and Acceptance Testing levels. Typically, functional testing involves identifying the functions the software is expected to perform, creating input data based on the specifications, determining the expected output, executing the test cases, and comparing the actual and expected outputs.
Test Stub: Test stubs are mainly used in the top-down approach of incremental testing. Stubs are computer programs that act as temporary replacements for a called module and give the same output as the actual product or software.
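A hypothetical sketch of a test stub in Python: the top-level module under test calls a lower-level price-lookup module that is not yet implemented, so a stub temporarily stands in for it and returns canned output of the right form.

```python
# Top-level module under test; it depends on a lower-level lookup module.
def report_total(order_id, price_lookup):
    prices = price_lookup(order_id)
    return sum(prices)

# Test stub: a temporary replacement for the unfinished lower-level
# module, returning output of the same form as the real one would.
def price_lookup_stub(order_id):
    return [10.0, 5.0]

print(report_total("A1", price_lookup_stub))  # 15.0
```

Once the real lookup module exists, it is passed in place of the stub with no change to the module under test.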
What is Alpha Testing? Alpha testing is a type of acceptance testing, performed to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users by using black-box and white-box techniques. The aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment and, usually, the testers are internal employees of the organization. To put it as simply as possible, this kind of testing is called alpha only because it is done early on, near the end of the development of the software, and before beta testing.
What is Beta Testing? Beta testing of a product is performed by "real users" of the software application in a "real environment" and can be considered a form of external user acceptance testing. A beta version of the software is released to a limited number of end-users of the product to obtain feedback on product quality. Beta testing reduces product failure risks and increases the quality of the product through customer validation. It is the final test before shipping a product to the customers. Direct feedback from customers is a major advantage of beta testing. This testing helps to test the product in the customer's environment.
Public Beta Testing: Product is publicly released to the outside world via online channels and data
can be gathered from anyone. Based on feedback, product improvements can be done. For example,
Microsoft conducted the largest of all Beta Tests for its OS Windows 8 before officially releasing it.
Technical Beta Testing: Product is released to the internal group of an organization and gathers
feedback/data from the employees of the organization.
Focused Beta: Product is released to the market for gathering feedback on specific features of the
program. For example, important functionality of the software.
Phases of Testing
Alpha and beta tests are typically carried out for "off-the-shelf" software by product-oriented companies. The phases of testing for a product company typically vary from those of a service-oriented organization. The following are the testing phases adopted by product firms:
Pre-Alpha: Software is a prototype. The UI is complete, but not all features are. At this stage, the software is not published.
Alpha: Software is near the end of its development and is internally tested for bugs/issues.
Beta: Software is stable and is released to a limited user base. The goal is to get customer feedback
on the product and make changes in software accordingly
Release Candidate (RC): Based on the feedback from beta testing, you make changes to the software and want to test the bug fixes. At this stage, you do not want to make radical changes in functionality but just check for bugs. The RC is also put out to the public.
Release: Everything works; the software is released to the public.
Note: Above is a standard definition of the Testing stages but in order to garner marketing buzz,
companies combine stages like "pre-alpha beta", "pre-beta" etc.
Software Maintenance
Software Maintenance is the process of modifying a software product after it has been delivered to
the customer. The main purpose of software maintenance is to modify and update the software application after delivery, to correct faults and to improve performance.
1. Inventory Analysis:
Every software organisation should have an inventory of all the applications.
•Inventory can be nothing more than a spreadsheet model containing information that provides
a detailed description of every active application.
•By sorting this information according to business criticality, longevity, current maintainability and other locally important criteria, candidates for re-engineering appear.
•Resources can then be allocated to candidate applications for re-engineering work.
2. Document restructuring:
Documentation of a system either explains how it operates or how to use it.
•Documentation must be updated.
•It may not be necessary to fully document an application.
•If the system is business critical, it must be fully re-documented.
3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools extract data, architectural and procedural design information from an existing program.
4. Code restructuring:
•To accomplish code restructuring, the source code is analysed using a restructuring tool. Violations of structured programming constructs are noted, and the code is then restructured.
•The resultant restructured code is reviewed and tested to ensure that no anomalies have been introduced.
5. Data Restructuring:
•Data restructuring begins with the reverse engineering activity.
•The current data architecture is dissected, and the necessary data models are defined.
•Data objects and attributes are identified, and existing data structures are reviewed for quality.
6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers design information from existing software but also uses this information to alter or reconstitute the existing system in an effort to improve its overall quality.