
Waterfall model

Winston Royce introduced the Waterfall Model in 1970. This model has five
phases: requirements analysis and specification; design; implementation and
unit testing; integration and system testing; and operation and maintenance.
The phases always follow this order and do not overlap: the developer must
complete every phase before the next phase begins. The model is named the
"Waterfall Model" because its diagrammatic representation resembles a
cascade of waterfalls.

1. Requirements analysis and specification phase: The aim of this phase is
to understand the exact requirements of the customer and to document them
properly. The customer and the software developer work together to
document all the functional, performance, and interfacing requirements of
the software. It describes the "what" of the system to be produced, not the
"how." In this phase, a large document called the Software Requirement
Specification (SRS) document is created, which contains a detailed
description of what the system will do in common language.

2. Design Phase: This phase aims to transform the requirements gathered in
the SRS into a suitable form which permits further coding in a programming
language. It defines the overall software architecture together with high-level
and detailed design. All this work is documented as a Software Design
Document (SDD).

3. Implementation and unit testing: During this phase, the design is
implemented. If the SDD is complete, the implementation or coding phase
proceeds smoothly, because all the information needed by the software
developers is contained in the SDD.
During unit testing, the code is thoroughly examined and modified. Small modules
are tested in isolation initially. After that, these modules are tested by writing
some overhead code to check the interaction between these modules and the
flow of intermediate output (a short code sketch after the phase descriptions
illustrates this).

4. Integration and System Testing: This phase is crucial, because the quality
of the end product is determined by the effectiveness of the testing carried
out. Better testing leads to satisfied customers, lower maintenance costs,
and accurate results. Unit testing checks individual modules in isolation;
in this phase, however, the modules are tested for their interactions
with each other and with the system.

5. Operation and maintenance phase: Maintenance covers the activities
performed after the software has been delivered to the customer, installed,
and made operational.
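
As a minimal illustration of phases 3 and 4 above (not part of the original text; the module and function names are assumptions made for the example), the sketch below first tests a small module in isolation and then uses a little "overhead" driver code to check the interaction between two modules:

    import unittest

    # A small, hypothetical module under test.
    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def order_total(prices, percent):
        """A second module that relies on apply_discount for each price."""
        return round(sum(apply_discount(p, percent) for p in prices), 2)

    class TestInIsolation(unittest.TestCase):
        # Unit testing: one module examined on its own.
        def test_apply_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    class TestModuleInteraction(unittest.TestCase):
        # Driver ("overhead") code: checks the flow of intermediate output between modules.
        def test_order_total_uses_discounted_prices(self):
            self.assertEqual(order_total([100.0, 50.0], 20), 120.0)

    if __name__ == "__main__":
        unittest.main()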

When to use SDLC Waterfall Model?


Some circumstances where the use of the Waterfall model is most suited are:

o When the requirements are constant and do not change regularly.
o When the project is short.
o When the situation is stable.
o When the tools and technology used are consistent and not changing.
o When resources are well prepared and available to use.

Advantages of Waterfall model


o This model is simple to implement, and the number of resources
required for it is minimal.
o The requirements are simple and explicitly declared, and they remain
unchanged during the entire project development.
o The start and end points for each phase are fixed, which makes it easy to
track progress.
o The release date for the complete product, as well as its final cost, can
be determined before development.
o It gives easy control and clarity for the customer due to a strict
reporting system.
Software Requirement Specifications
The output of the requirements stage of the software development
process is the Software Requirements Specification (SRS), also called
the requirements document. This report lays a foundation for software
engineering activities and is constructed once the entire set of requirements has been
elicited and analyzed. The SRS is a formal report which acts as a representation of
the software and enables the customers to review whether it (the SRS) is according to
their requirements. It comprises the user requirements for a system as well
as detailed specifications of the system requirements.

The SRS is a specification for a specific software product, program, or set of
applications that perform particular functions in a specific environment. It
serves several goals depending on who is writing it. First, the SRS could be
written by the client of a system. Second, the SRS could be written by a
developer of the system. The two cases create entirely different situations
and establish different purposes for the document altogether. In the first case,
the SRS is used to define the needs and expectations of the users. In the second
case, the SRS is written for a different purpose and serves as a contract document
between customer and developer.

o Sequence Diagram: It shows the interactions between the objects in
terms of messages exchanged over time. It delineates in what order, and
how, the objects in a system interact.

o Activity Diagram: It models the flow of control from one activity to the
other. With the help of an activity diagram, we can model sequential
and concurrent activities. It visually depicts the workflow as well as what
causes an event to occur.

o Class Diagram: The class diagram is one of the most widely used
diagrams. It is the backbone of all object-oriented software
systems. It depicts the static structure of the system, displaying the
system's classes, attributes, and methods. It is helpful in recognizing the
relationships between different objects as well as classes (a small code
sketch of such a structure follows this list).
o Component Diagram: It portrays the organization of the physical
components within the system. It is used for modeling execution
details. It determines whether the desired functional requirements have
been considered by the planned development or not, as it depicts the
structural relationships between the elements of a software system.
o Deployment Diagram: It presents the system's software and its
hardware by telling what the existing physical components are and
what software components are running on them. It produces
information about system software. It is incorporated whenever
software is used, distributed, or deployed across multiple machines
with dissimilar configurations.

o Use Case Diagram: It represents the functionality of a system by
utilizing actors and use cases. It encapsulates the functional
requirements of a system and their association with actors. It portrays the
use case view of a system.
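
As a rough illustration of the static structure that a class diagram captures (classes, attributes, methods, and the relationships between them), the hypothetical Python classes below correspond to two class boxes connected by an association; the names are assumptions for this example only.

    class Account:
        """Would appear as a class box: attributes owner/balance, methods deposit."""
        def __init__(self, owner, balance=0.0):
            self.owner = owner          # attribute
            self.balance = balance      # attribute

        def deposit(self, amount):      # method
            self.balance += amount

    class Customer:
        """Associated with Account: a customer 'has' accounts (one-to-many relationship)."""
        def __init__(self, name):
            self.name = name
            self.accounts = []          # the association shown as a line in the diagram

        def open_account(self):
            account = Account(owner=self.name)
            self.accounts.append(account)
            return account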

Prototype Model
The prototype model requires that, before carrying out the development of the
actual software, a working prototype of the system should be built. A
prototype is a toy implementation of the system. A prototype usually turns
out to be a very crude version of the actual system, possibly exhibiting limited
functional capabilities, low reliability, and inefficient performance compared
to the actual software. In many instances, the client only has a general view of
what is expected from the software product. In such a scenario, where there is
an absence of detailed information regarding the input to the system, the
processing needs, and the output requirements, the prototyping model may be
employed.
Steps of Prototype Model

1. Requirement Gathering and Analysis
2. Quick Design
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product

Advantages of Prototype Model

1. Reduces the risk of incorrect user requirements.
2. Good where requirements are changing or uncommitted.
3. Regular visible progress aids management.
4. Supports early product marketing.
5. Reduces maintenance cost.
6. Errors can be detected much earlier, as the system is built side by side.

Disadvantages of Prototype Model

1. An unstable or badly implemented prototype often becomes the final
product.
2. Requires extensive customer collaboration:

o Costs the customer money.
o Needs a committed customer.
o Difficult to finish if the customer withdraws.
o May be too customer-specific, with no broad market.

3. Difficult to know how long the project will last.
4. Easy to fall back into code-and-fix without proper requirement
analysis, design, customer evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools and techniques are required to build a prototype.
7. It is a time-consuming process.

Architectural Design in Software Engineering
Architectural design in software engineering is about decomposing
the system into interacting components. It is expressed as a block
diagram defining an overview of the system structure, the features of
the components, and how these components communicate with
each other to share data. It identifies the components that are
necessary for developing a computer-based system, together with the
communication between them, i.e., the relationships between these
components, and it defines the structure and properties of those
components. The architectural design process is about identifying the
components, i.e., the subsystems that make up the system, the structure
of each subsystem, and their interrelationships. It is an early stage of the
system design phase and acts as a link between the requirements
specification and the design process.
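
A minimal sketch (with assumed names, not taken from the original text) of what "decomposing the system into interacting components" can look like in code: two components with clearly defined responsibilities that communicate only through a narrow interface.

    # Component 1: data access -- knows how to fetch raw records.
    class OrderRepository:
        def __init__(self, records):
            self._records = records

        def find_by_customer(self, customer_id):
            return [r for r in self._records if r["customer_id"] == customer_id]

    # Component 2: business logic -- depends only on the repository's interface.
    class BillingService:
        def __init__(self, repository):
            self._repository = repository

        def total_due(self, customer_id):
            orders = self._repository.find_by_customer(customer_id)
            return sum(o["amount"] for o in orders)

    # The 'block diagram' expressed in code: BillingService -> OrderRepository.
    repo = OrderRepository([{"customer_id": 1, "amount": 40.0},
                            {"customer_id": 1, "amount": 10.0},
                            {"customer_id": 2, "amount": 99.0}])
    service = BillingService(repo)
    assert service.total_due(1) == 50.0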

Software Design
Software design is a mechanism to transform user requirements into a
suitable form, which helps the programmer in software coding and
implementation. It deals with translating the client's requirements, as
described in the SRS (Software Requirement Specification) document, into a form
that is easily implementable using a programming language.

The software design phase is the first step in the SDLC (Software Development
Life Cycle) that moves the concentration from the problem domain to the
solution domain. In software design, we consider the system to be a set of
components or modules with clearly defined behaviors and boundaries.

Verification:

Verification is the process of checking that the software achieves its goal without any
bugs. It ensures that the product is being built correctly, i.e., it answers the question
"Are we building the product right?" It verifies whether the developed product fulfills
the requirements that we have.

Verification is Static Testing.

Activities involved in verification:

Inspections

Reviews

Walkthroughs

Desk-checking

Validation:

Validation is the process of checking whether the software product is up to the mark,
in other words, whether the product meets the high-level requirements. It checks
whether we are developing the right product, i.e., it answers the question "Are we
building the right product?" It is the validation of the actual product against the
expected product.

Validation is Dynamic Testing.

Activities involved in validation:

Black box testing

White box testing

Unit testing

Integration testing

Software testing has different goals and objectives. The major objectives of
software testing are as follows:

 Finding defects which may be created by the programmer while
developing the software.
 Gaining confidence in and providing information about the level
of quality.
 To prevent defects.
 To make sure that the end result meets the business and user
requirements.
 To ensure that it satisfies the BRS (Business Requirement
Specification) and the SRS (System Requirement Specification).
 To gain the confidence of the customers by providing them a quality
product.

Software testing helps in finalizing the software application or product
against business and user requirements. It is very important to have good
test coverage in order to test the software application completely and to make
sure that it is performing well and as per the specifications.

Levels of Testing
In this section, we are going to understand the various levels of software
testing.

As we learned in the earlier section of the software testing tutorial, to test
any application or software, the test engineer needs to follow multiple testing
techniques.

We implement software testing in order to detect errors, so that those errors
can be removed and a product of higher quality obtained.
What are the levels of Software Testing?
Testing levels are the procedure for finding the missing areas and avoiding
overlap and repetition between the development life cycle stages. We
have already seen the various phases of the SDLC (Software Development
Life Cycle), such as requirement collection, design, coding, testing,
deployment, and maintenance.

In order to test any application, we need to go through all the above phases
of SDLC. Like SDLC, we have multiple levels of testing, which help us maintain
the quality of the software.

Different Levels of Testing

The levels of software testing involve the different methodologies that can
be used while performing software testing.

In software testing, we have four different levels of testing, which are
discussed below:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

Software Engineering - Relation between People and Effort



In a small software development project a single person can analyze requirements, perform
design, generate code, and conduct tests. As the size of a project increases, more people must
become involved. (We can rarely afford the luxury of approaching a ten person-year effort
with one person working for ten years!)

There is a common myth that is still believed by many managers who are responsible for
software development effort: "If we fall behind schedule, we can always add more
programmers and catch up later in the project." Unfortunately, adding people late in a project
often has a disruptive effect on the project, causing schedules to slip even further. The people
who are added must learn the system, and the people who teach them are the same people
who were doing the work. While teaching, no work is done, and the project falls further
behind.

In addition to the time it takes to learn the system, more people increase the number of
communication paths and the complexity of communication throughout a project. Although
communication is absolutely essential to successful software development, every new
communication path requires additional effort and therefore additional time.
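
The growth in communication overhead can be made concrete with the commonly cited formula n(n-1)/2 for the number of pairwise communication paths among n people; the short sketch below (not from the original text) simply evaluates it for a few team sizes.

    def communication_paths(team_size):
        """Pairwise communication paths among team_size people: n(n-1)/2."""
        return team_size * (team_size - 1) // 2

    for n in (2, 5, 10, 20):
        print(f"{n:>2} people -> {communication_paths(n):>3} paths")
    # 2 people -> 1 path; 5 -> 10; 10 -> 45; 20 -> 190.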

Extreme Programming (XP)

Extreme programming (XP) is one of the most important software development frameworks among
Agile models. It is used to improve software quality and responsiveness to customer
requirements. The extreme programming model recommends taking the best practices that have
worked well in past program development projects to extreme levels. Some of the good practices
that have been recognized in the extreme programming model, and which it suggests be used to
the maximum, are given below:

Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming,
in which coding and reviewing of written code are carried out by a pair of programmers who
switch roles between them every hour.

Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven
development (TDD) to continually write and execute test cases. In the TDD approach, test cases
are written even before any code is written (a short sketch of this rhythm appears after these
practices).

Incremental development: Incremental development is very good because customer feedback is
gained continuously, and based on this feedback the development team comes up with new
increments every few days after each iteration.

Simplicity: Simplicity makes it easier to develop good quality code as well as to test and debug it.

Design: Good quality design is important to develop good quality software. So, everybody should
design daily.

Integration testing: It helps to identify bugs at the interfaces of different functionalities. Extreme
programming suggests that the developers should achieve continuous integration by building and
performing integration testing several times a day.
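
A hedged sketch of the TDD rhythm mentioned under the Testing practice above: the test is written first and fails until the (hypothetical) function is implemented. The function name and behaviour are assumptions made purely for illustration.

    import unittest

    # Step 1 (red): write the test before any production code exists.
    class TestSlugify(unittest.TestCase):
        def test_spaces_become_hyphens_and_case_is_lowered(self):
            self.assertEqual(slugify("Hello TDD World"), "hello-tdd-world")

    # Step 2 (green): write just enough code to make the test pass.
    def slugify(text):
        return "-".join(text.lower().split())

    # Step 3 (refactor): clean up while keeping the test green.
    if __name__ == "__main__":
        unittest.main()
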
Agile Model
The meaning of Agile is swift or versatile. The "Agile process model" refers to a
software development approach based on iterative development. Agile
methods break tasks into smaller iterations, or parts, and do not directly involve
long-term planning. The project scope and requirements are laid down at the
beginning of the development process. Plans regarding the number of
iterations, and the duration and scope of each iteration, are clearly defined in
advance.

Each iteration is considered a short time "frame" in the Agile process
model, which typically lasts from one to four weeks. The division of the entire
project into smaller parts helps to minimize the project risk and to reduce the
overall project delivery time. Each iteration involves a team working through a
full software development life cycle, including planning, requirements analysis,
design, coding, and testing, before a working product is demonstrated to the
client.

The Make or Buy Decision



Introduction
Are you outsourcing enough? This was one of the main questions asked by
management consultants during the outsourcing boom. Outsourcing was
viewed as one of the best ways of getting things done for a fraction of the
original cost.
Outsourcing is closely related to the make-or-buy decision. Corporations
made decisions on what to make internally and what to buy from outside in
order to maximize their profit margins.
As a result of this, organizational functions were divided into segments,
and some of those functions were outsourced to expert companies who could
do the same job for much less cost.
The make-or-buy decision is always a valid concept in business. No organization
should attempt to make something on its own when it has the
opportunity to buy the same thing for a much lower price.
This is why most electronic items are manufactured, and software systems
developed, in Asia on behalf of organizations in the USA and Europe.

Four Numbers You Should Know


When you are making a make-or-buy decision, there are four
numbers you need to be aware of. Your decision will be based on the values
of these four numbers. Let's have a look at the numbers now; they are quite
self-explanatory, and a worked break-even sketch follows the list.

 The volume
 The fixed cost of making
 Per-unit direct cost when making
 Per-unit cost when buying
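
A worked sketch (with illustrative numbers that are not from the original text) showing how the four figures combine: making is cheaper than buying only once the fixed cost is spread over a large enough volume.

    def make_or_buy(volume, fixed_cost_of_making, unit_cost_making, unit_cost_buying):
        # Total cost of making spreads the fixed cost over the whole volume.
        cost_to_make = fixed_cost_of_making + volume * unit_cost_making
        cost_to_buy = volume * unit_cost_buying
        return ("make" if cost_to_make < cost_to_buy else "buy",
                cost_to_make, cost_to_buy)

    # Example: $20,000 tooling cost, $3 per unit to make, $7 per unit to buy.
    print(make_or_buy(3_000, 20_000, 3.0, 7.0))   # ('buy', 29000.0, 21000.0)
    print(make_or_buy(10_000, 20_000, 3.0, 7.0))  # ('make', 50000.0, 70000.0)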

Software Testing Principles


Software testing is a procedure of executing software or an application to
identify defects or bugs. For testing an application or software, we need to
follow some principles to make our product defect-free; these principles also help
the test engineers to test the software efficiently in terms of effort and time. Here,
in this section, we are going to learn about the seven essential principles of software
testing.

Let us see the seven different testing principles, one by one:

o Testing shows the presence of defects
o Exhaustive Testing is not possible
o Early Testing
o Defect Clustering
o Pesticide Paradox
o Testing is context-dependent
o Absence of errors fallacy
Testing shows the presence of defects

The test engineer tests the application to find its defects; while doing testing, we
can only show that the application or software has errors, not that it is free of them.
The primary purpose of testing is to identify as many unknown bugs as possible
with the help of various methods and testing techniques, because every test should
be traceable to a customer requirement; that is, we want to find any defect that
might cause the product to fail to meet the client's needs.

By doing testing on any application, we can decrease the number of bugs,
but this does not mean that the application is defect-free: sometimes the
software seems to be bug-free while multiple types of testing are performed
on it, yet at the time of deployment on the production server the end user
may still encounter bugs that were not found during the testing process.

Exhaustive Testing is not possible


It is practically impossible to test all the modules and their features
with every effective and non-effective combination of input data throughout
the actual testing process.

Hence, instead of performing exhaustive testing, which would take boundless
effort with most of the hard work being unproductive, we prioritize the
combinations to test according to the importance of the modules, because the
product timelines will not permit us to perform every possible testing
scenario (an illustrative calculation follows).
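
A small illustration (with assumed figures, not from the original text) of why exhaustive testing is impractical: even a modest input form quickly produces an enormous number of input combinations.

    # Hypothetical input fields and the number of distinct values each can take.
    field_choices = {
        "country": 195,
        "age": 120,
        "payment_method": 5,
        "free_text_length": 1_000,  # even a crude bucketing of a text field
    }

    combinations = 1
    for field, choices in field_choices.items():
        combinations *= choices

    print(f"{combinations:,} combinations")           # 117,000,000 combinations
    print(f"~{combinations / (60 * 60 * 24):,.0f} days at one test per second")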

Early Testing

Here, early testing means that all testing activities should start in the early
stages of the software development life cycle, from the requirement analysis
stage onward, to identify the defects, because a bug found at an early stage
can be fixed in that stage itself, which costs much less than fixing a bug
identified in a later phase of the testing process.

To perform testing, we will require the requirement specification documents;
therefore, if the requirements are defined incorrectly, they can be fixed
directly rather than being fixed in a later stage, such as the development
phase.

Defect clustering

Defect clustering states that, throughout the testing process, most of the
bugs we detect are correlated with a small number of modules. There are
various reasons for this; for example, the modules may be complicated, or
the coding may be complex.

These types of software or applications follow the Pareto Principle,
which states that approximately 80 percent of the complications are present
in 20 percent of the modules. This helps us to find the risky modules, but the
method has its difficulties: if the same tests are performed repeatedly, they
will no longer be able to identify new defects.

Pesticide paradox

This principle states that if we execute the same set of test cases again
and again over a period of time, these tests will not be able
to find new bugs in the software or the application. To get over this
pesticide paradox, it is very important to review all the test cases frequently,
and new and different tests need to be written to exercise multiple parts
of the application or the software, which helps us to find more bugs.

Testing is context-dependent

The "testing is context-dependent" principle states that many kinds of applications,
such as e-commerce websites, commercial websites, and so on, are available in
the market, and a commercial site is tested differently from an e-commerce
website, because every application has its own needs, features,
and functionality. To check each type of application, we take the help of
various kinds of testing, different techniques, approaches, and multiple
methods. Therefore, testing depends on the context of the application.

Absence of errors fallacy

Once the application is completely tested and no bugs are identified
before the release, we can say that the application is 99 percent bug-free.
But if the application is tested against incorrect requirements, then
identifying flaws and fixing them within the given period will not help, as
the testing was done against the wrong specification, which does not reflect
the client's requirements. The absence-of-errors fallacy means that identifying and
fixing bugs will not help if the application is impractical and unable to
fulfill the client's requirements and needs.

Software Engineering Institute Capability Maturity Model (SEICMM)
The Capability Maturity Model (CMM) is a procedure used to develop and
refine an organization's software development process.

The model defines a five-level evolutionary path of increasingly organized
and consistently more mature processes.

CMM was developed and is promoted by the Software Engineering Institute
(SEI), a research and development center sponsored by the U.S. Department of
Defense (DoD).

The Capability Maturity Model is used as a benchmark to measure the maturity of
an organization's software process.
Methods of SEICMM
There are two methods of SEICMM:

Capability Evaluation: Capability evaluation provides a way to assess the
software process capability of an organization. The results of a capability
evaluation indicate the likely contractor performance if the contractor is
awarded the work. Therefore, the results of the software process capability
assessment can be used to select a contractor.

Software Process Assessment: Software process assessment is used by an
organization to improve its process capability. Thus, this type of evaluation is
for purely internal use.

SEI CMM categorizes software development organizations into the following five
maturity levels: Level 1 (Initial), Level 2 (Repeatable), Level 3 (Defined),
Level 4 (Managed), and Level 5 (Optimizing). The various levels of SEI CMM have
been designed so that it is easy for an organization to slowly build its quality
system starting from scratch.
White Box testing
The term 'white box' is used because of the internal perspective of the system.
The names clear box, white box, and transparent box denote the ability to
see through the software's outer shell into its inner workings.

It is performed by developers, after which the software is sent to the
testing team, which performs black-box testing. The main objective of
white-box testing is to test the application's infrastructure. It is done at the
lower levels of testing, as it includes unit testing and integration testing. It
requires programming knowledge, as it focuses mainly on the code structure, paths,
conditions, and branches of a program or software. The primary goal of white-
box testing is to focus on the flow of inputs and outputs through the software
and on strengthening the security of the software.

It is also known as structural testing, clear box testing, code-based testing,
and transparent testing. It is well suited to, and recommended for, algorithm
testing.
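
A hedged sketch of the white-box mindset (the function and its rules are invented for this example): the tests below are derived from the code's internal branches rather than from a specification, so each path through the function is exercised at least once.

    def shipping_fee(weight_kg, express):
        # Branch 1: invalid-input path
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        # Branches 2/3: light vs. heavy parcel
        fee = 5.0 if weight_kg <= 2 else 5.0 + (weight_kg - 2) * 1.5
        # Branch 4: express surcharge path
        return fee * 2 if express else fee

    # One test per branch, chosen by reading the code, not the requirements.
    assert shipping_fee(1, express=False) == 5.0          # light, standard
    assert shipping_fee(4, express=False) == 8.0          # heavy, standard
    assert shipping_fee(1, express=True) == 10.0          # express surcharge
    try:
        shipping_fee(0, express=False)                    # invalid-input branch
    except ValueError:
        pass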


Black Box testing


The primary source of black-box testing is the specification of requirements
stated by the customer. It is another type of manual testing. It is a
software testing technique that examines the functionality of the software
without knowing its internal structure or coding. It does not require
programming knowledge of the software. All test cases are designed by
considering the input and output of a particular function. In this testing, the
test engineer analyzes the software against the requirements, identifies any
defects or bugs, and sends them back to the development team.

In this method, the tester selects a function, gives it an input value to examine
its functionality, and checks whether the function is giving the expected
output or not. If the function produces the correct output, it passes the test;
otherwise, it fails.

Black box testing is less exhaustive than white box and grey box testing
methods. It is the least time-consuming process among all the testing
processes. The main objective of implementing black box testing is to check
the software against the business needs or the customer's requirements.

In other words, we can say that black box testing is a process of checking the
functionality of an application as per the customer's requirements. Mainly,
there are three types of black-box testing: functional testing, non-functional
testing, and regression testing.
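
As a contrast with the white-box sketch earlier, the hypothetical test below is written purely from an input/output specification ("a valid email contains exactly one @ with text on both sides"), with no knowledge of how the function under test is implemented; all names and rules are assumptions for illustration.

    import unittest
    import re

    def is_valid_email(address):
        """Implementation detail -- a black-box tester never looks inside."""
        return bool(re.fullmatch(r"[^@\s]+@[^@\s]+", address))

    class TestEmailValidationBlackBox(unittest.TestCase):
        def test_expected_outputs_for_given_inputs(self):
            cases = {
                "user@example.com": True,    # normal valid input
                "userexample.com": False,    # missing @
                "user@@example.com": False,  # two @ signs
                "@example.com": False,       # nothing before @
            }
            for given_input, expected_output in cases.items():
                self.assertEqual(is_valid_email(given_input), expected_output)

    if __name__ == "__main__":
        unittest.main()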
