
Module 1 - Introduction To Software Engineering and Process Models

Write suitable applications of different software models.

https://www.ques10.com/p/8317/write-suitable-applications-of-different-softwar-1/

1. Waterfall model

● This model is used only when the requirements are very well known, clear and
fixed.
● Product definition is stable.
● Technology is understood.
● There are no ambiguous requirements.
● Ample resources with the required expertise are freely available.
● The project is short.

2. V-model

● The V-shaped model should be used for small to medium sized projects where
requirements are clearly defined and fixed.
● The V-Shaped model should be chosen when ample technical resources are
available with needed technical expertise.

3. Incremental model

● This model can be used when the requirements of the complete system are
clearly defined and understood.
● Major requirements must be defined; however, some details can evolve with
time.
● There is a need to get a product to the market early.
● A new technology is being used.
● Resources with the needed skill set are not available.
● There are some high-risk features and goals.

4. RAD model

● RAD should be used when there is a need to create a system that can be modularised and delivered within 2-3 months.

● It should be used if there’s high availability of designers for modelling and the
budget is high enough to afford their cost along with the cost of automated code
generating tools.
● RAD SDLC model should be chosen only if resources with high business
knowledge are available and there is a need to produce the system in a short
span of time (2-3 months).

5. Agile model

● When new changes need to be implemented. The freedom Agile gives to change is very important; new changes can be implemented at very little cost because of the frequency of the increments that are produced.

● To implement a new feature, the developers need to lose only a few days' (or even a few hours') work to roll it back and implement it.

6. Iterative model

● Requirements of the complete system are clearly defined and understood.

● When the project is big.

● Major requirements must be defined; however, some details can evolve with
time.

7. Spiral model

● When cost and risk evaluation is important


● For medium to high-risk projects
● Long-term project commitment unwise because of potential changes to
economic priorities
● Users are unsure of their needs
● Requirements are complex
● New product line
● Significant changes are expected (research and exploration)

8. Prototype model

● The Prototype model should be used when the desired system needs to have a lot of interaction with the end users.
● Typically, online systems and web interfaces have a very high amount of interaction with end users and are best suited to the Prototype model. It may take a while to build a system that offers ease of use and needs minimal training for the end user.

What is Agile Methodology? Explain it with the principles used and give an example of one such software model

https://www.ques10.com/p/8324/what-is-agile-methodology-explain-it-with-the-pr-1/

● Agile software development describes an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organising and cross-functional teams and their customers.
● It advocates adaptive planning, evolutionary development, early delivery and
continual improvement and it encourages rapid and flexible response to change.
● The term Agile was popularised, in this context, by the Manifesto for Agile
Software Development.
● The values and principles adopted in this manifesto were derived from and
underpin a broad range of software development frameworks, including Scrum
and Kanban.
● There is significant subjective evidence that adopting agile practices and values
improves the agility of software professionals, teams and organisations.

● Agile Principles

1. Customer satisfaction by early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development.
3. Working software is delivered frequently.
4. Close, daily cooperation between business people and developers.
5. Projects are built around motivated individuals, who should be trusted.
6. Face to face conversation is the best form of communication.
7. Working software is the primary measure of progress.
8. Sustainable development, able to maintain a constant pace.
9. Continuous attention to technical excellence and good design.
10. Simplicity is essential.
11. Best architectures, requirements and designs emerge from
self-organising teams.
12. Regularly, the team reflects on how to become more effective and adjusts accordingly.

● Example
○ Extreme programming (XP) is one of the best known agile processes. The
extreme programming approach was suggested by Kent Beck in 2000.
Extreme programming is explained as follows

■ The customer specifies and prioritises the system requirements. Customers become one of the important members of the development team. The developer and customer together prepare a story-card in which customer needs are mentioned.
■ The developer team then aims to implement the scenarios in
story-card.
■ After developing the story-card the development team breaks
down the total work in small tasks. The efforts and the estimated
resources required for these tasks are estimated.
■ The customer prioritises the stories for implementation. If the
requirement changes then sometimes unimplemented stories
have to be discarded. Then release the complete software in small
and frequent releases.
■ For accommodating new changes, new story-cards must be developed.
■ Evaluate the system along with the customer. The process is demonstrated by the following figure: the Extreme Programming release cycle.

Explain the process of CMM


https://www.ques10.com/p/17062/explain-the-process-of-cmm/

The Capability Maturity Model (CMM) is a benchmark for measuring the maturity of an organisation's software process. It is a methodology used to develop and refine an organisation's software development process. CMM can be used to assess an organisation against a scale of five process maturity levels based on certain Key Process Areas (KPA). It describes the maturity of the company based upon the project the company is dealing with and the clients. Each level ranks the organisation according to its standardisation of processes in the subject area being assessed.

A maturity model provides:

1. A place to start
2. The benefit of a community’s prior experiences
3. A common language and a shared vision
4. A framework for prioritising actions
5. A way to define what improvement means for your organisation

In CMMI models with a staged representation, there are five maturity levels
designated by the numbers 1 through 5 as shown below:

1. Initial
2. Managed
3. Defined
4. Quantitatively Managed
5. Optimising

Maturity levels consist of a predefined set of process areas. The maturity levels are
measured by the achievement of the specific and generic goals that apply to each
predefined set of process areas. The following sections describe the characteristics of
each maturity level in detail.

Maturity Level 1 – Initial:

The company has no standard process for software development, nor a project-tracking system that enables developers to predict costs or finish dates with any accuracy.

In detail we can describe it as given below:

● At maturity level 1, processes are usually ad hoc and chaotic.

● The organisation usually does not provide a stable environment. Success in these
organisations depends on the competence and heroics of the people in the
organisation and not on the use of proven processes.

● Maturity level 1 organisations often produce products and services that work, but the company has no standard process for software development, nor a project-tracking system that enables developers to predict costs or finish dates with any accuracy.

Maturity Level 2 – Managed:

The company has installed basic software management processes and controls, but there is no consistency or coordination among different groups.

In detail we can describe it as given below:

● At maturity level 2, an organisation has achieved all the specific and generic goals
of the maturity level 2 process areas. In other words, the projects of the
organisation have ensured that requirements are managed and that processes
are planned, performed, measured, and controlled.

● The process discipline reflected by maturity level 2 helps to ensure that existing
practices are retained during times of stress. When these practices are in place,
projects are performed and managed according to their documented plans.

● At maturity level 2, requirements, processes, work products, and services are managed. The status of the work products and the delivery of services are visible to management at defined points.

Maturity Level 3 – Defined:

The company has pulled together a standard set of processes and controls for the entire organisation so that developers can move between projects more easily and customers can begin to get consistency from different groups.

In detail we can describe it as given below:


● At maturity level 3, an organisation has achieved all the specific and generic
goals.

● At maturity level 3, processes are well characterised and understood, and are
described in standards, procedures, tools, and methods.

● A critical distinction between maturity level 2 and maturity level 3 is the scope of
standards, process descriptions, and procedures. At maturity level 2, the
standards, process descriptions, and procedures may be quite different in each
specific instance of the process (for example, on a particular project). At maturity
level 3, the standards, process descriptions, and procedures for a project are
tailored from the organisation’s set of standard processes to suit a particular
project or organisational unit.

Maturity Level 4 – Quantitatively Managed:

In addition to implementing standard processes, the company has installed systems to measure the quality of those processes across all projects.

In detail we can describe it as given below:

● At maturity level 4, an organisation has achieved all the specific goals of the
process areas assigned to maturity levels 2, 3, and 4 and the generic goals
assigned to maturity levels 2 and 3.

● At maturity level 4, sub-processes are selected that significantly contribute to overall process performance. These selected sub-processes are controlled using statistical and other quantitative techniques.

● Quantitative objectives for quality and process performance are established and
used as criteria in managing processes. Quantitative objectives are based on the
needs of the customer, end users, organisation, and process implementers.
Quality and process performance are understood in statistical terms and are
managed throughout the life of the processes.

Maturity Level 5 – Optimising:

The company has accomplished all of the above and can now begin to see patterns in performance over time, so it can tweak its processes to improve productivity and reduce defects in software development across the entire organisation.

In detail we can describe it as given below:


● At maturity level 5, an organisation has achieved all the specific goals of the
process areas assigned to maturity levels 2, 3, 4, and 5 and the generic goals
assigned to maturity levels 2 and 3.

● Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes.

● Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.

● Quantitative process-improvement objectives for the organisation are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement.

● The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organisation's set of standard processes are targets of measurable improvement activities.

● Optimising processes that are agile and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organisation.

● The organisation's ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. Improvement of the processes is inherently part of everybody's role, resulting in a cycle of continual improvement.

What is an Agile process and what are its advantages? Explain any one Agile process

https://www.ques10.com/p/17108/agile-process-and-its-advantagesexplain-any-one--1/

● The Agile SDLC model is a combination of iterative and incremental process models with a focus on process adaptability and customer satisfaction by rapid delivery of working software products.

● Agile Methods break the product into small incremental builds. These builds are
provided in iterations. Each iteration typically lasts from about one to three
weeks. Every iteration involves cross functional teams working simultaneously on
various areas like planning, requirements analysis, design, coding, unit testing,
and acceptance testing.
● At the end of the iteration a working product is displayed to the customer and
important stakeholders.

● The Agile model believes that every project needs to be handled differently and
the existing methods need to be tailored to best suit the project requirements. In
agile the tasks are divided into time boxes (small time frames) to deliver specific
features for a release.

● Iterative approach is taken and working software build is delivered after each
iteration. Each build is incremental in terms of features; the final build holds all
the features required by the customer.

● The Agile thought process started early in software development and became popular over time due to its flexibility and adaptability.

Advantages

● Customer satisfaction by rapid, continuous delivery of useful software.


● People and interactions are emphasised rather than process and tools.
Customers, developers and testers constantly interact with each other.
● Working software is delivered frequently (weeks rather than months).
● Face-to-face conversation is the best form of communication.
● Close, daily cooperation between business people and developers.
● Continuous attention to technical excellence and good design.
● Regular adaptation to changing circumstances.
● Even late changes in requirements are welcomed.

Dynamic Systems Development Method

● Provides a framework for building and maintaining systems which meet tight
time constraints using incremental prototyping in a controlled environment
● Uses the Pareto principle (80% of the project can be delivered in 20% of the time required to deliver the entire project)
● Each increment only delivers enough functionality to move to the next increment
● Uses time boxes to fix time and resources to determine how much functionality
will be delivered in each increment

● Guiding principles

○ Active user involvement

○ Teams empowered to make decisions


○ Fitness for business purpose is the criterion for deliverable acceptance
○ Iterative and incremental development is needed to converge on an accurate business solution
○ All changes made during development are reversible
○ Requirements are baselined at a high level
○ Testing is integrated throughout the life-cycle
○ Collaborative and cooperative approach between stakeholders

● Life cycle activities

○ Feasibility study (establishes requirements and constraints)
○ Business study (establishes functional and information requirements needed to provide business value)
○ Functional model iteration (produces set of incremental prototypes to
demonstrate functionality to customer)
○ Design and build iteration (revisits prototypes to ensure they provide
business value for end users, may occur concurrently with functional
model iteration)
○ Implementation (latest iteration placed in operational environment)

Define Software Engineering. Explain in brief the software process framework.
https://www.ques10.com/p/8329/define-software-engineering-explain-in-brief-the-1/

Software Engineering
● Software engineering is a discipline in which theories, methods and tools are
applied to develop professional software products.
● The definition of software engineering is based on two terms:
○ Discipline
■ To find a solution to a problem, an engineer applies appropriate theories, methods and tools. While finding the solution, engineers must consider organisational and financial constraints; the solution has to be found within these constraints.
○ Product
■ The software product is developed by following systematic theories, methods and tools along with the appropriate management activities.
■ Software Engineering is a layered technology. Any software can be developed using this layered approach.
■ The various layers on which the technology is based are the quality focus layer, the process layer, the methods layer, and the tools layer.
■ Disciplined quality management is the backbone of software engineering technology.
■ The process layer is the foundation of software engineering. Basically, the process defines the framework for the timely delivery of software.

● In the method layer the actual method of implementation is carried out with the
help of requirement analysis, designing, coding using desired programming
constructs and testing.
● Software tools are used to bring automation in the software development
process.
● Thus, software engineering is a combination of process, methods and tools for the development of quality software.

Software Process Framework


● A process framework establishes the foundation for a complete software process
by identifying a small number of framework activities that are applicable to all
software projects, regardless of their size or complexity.
● In addition, the process framework encompasses a set of activities that are
applicable across the entire software process.
● Referring to the following figure, each framework activity is populated by a set of software engineering actions – a collection of related tasks that produces a major software engineering work product (e.g. design is a software engineering action).
● Each action is populated with individual work tasks that accomplish some part of
the work implied by the action.

● The following generic process (used as a basis for the description of process
models in subsequent chapters) is applicable to the vast majority of software
projects:
1. Communication
a. This framework activity involves heavy communication and
collaboration with the customer (and other stakeholders) and
encompasses requirements gathering and other related activities.
2. Planning
a. This activity establishes a plan for the software engineering work
that follows. It describes the technical tasks to be conducted, the
risks that are likely, the resources that will be required, the work
products to be produced, and a work schedule.
3. Modelling
a. This activity encompasses the creation of models for the developer
and the customer to better understand software requirements
and the design that will achieve those requirements.
4. Construction
a. This activity combines code generation (either manual or
automated) and the testing that is required to uncover errors in
the code.
5. Deployment
a. The software (as a complete entity or as a partially completed
increment) is delivered to the customers who evaluate the
delivered product and provide feedback based on the evaluation.
● These five generic framework activities can be used during the development of
small programs, the creation of large web applications, and for the engineering
of large, complex computer based systems. The details of the software process
will be quite different in each case, but the framework activities remain the
same.

What is Agility in the context of software engineering? Explain Extreme Programming (XP) with suitable diagrams.
https://www.ques10.com/p/8333/what-is-agility-in-context-of-software-engineeri-1/

Agility in context of software engineering

● Agility means effective (rapid and adaptive) response to change and effective communication among all stakeholders.
● It means drawing the customer onto the team and organising the team so that it is in control of the work performed. Agile processes are light-weight methods that are people-based rather than plan-based.
● The agile process forces the development team to focus on the software itself rather than on design and documentation.
● The agile process believes in iterative methods.
● The aim of the agile process is to deliver a working model of the software quickly to the customer. For example, Extreme Programming is the best known agile process.

Extreme Programming

● Extreme programming uses an object-oriented approach as its preferred development paradigm.
● Extreme programming encompasses a set of rules and practices that occur
within the context of four framework activities: planning, design, coding, and
testing.

1. Planning:

● The planning activity begins with the creation of a set of stories that describe
required features and functionality for software to be built.
● Each story is written by the customer and is placed on an index card. The customer assigns a value to the story based on the overall business value of the feature or function.
● Members of the XP (Extreme Programming) team then assess each story and
assign a cost – measured in development weeks – to it.
● If the story will require more than three development weeks, the customer is
asked to split the story into smaller stories, and the assignment of value and cost
occurs again.
● Customers and the XP team work together to decide how to group stories into
the next release to be developed by the XP team.
● Once a basic commitment is made for a release, the XP team orders the stories
that will be developed in one of three ways:
1. All stories will be implemented immediately.
2. The stories with highest value will be moved up in the schedule and
implemented first.
3. The riskiest stories will be moved up in the schedule and implemented first.
● As development work proceeds, the customer can add stories, change the value
of an existing story, split stories or eliminate them.
● The XP team then reconsiders all remaining releases and modifies its plan
accordingly.
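
The planning mechanics above (customer-assigned value, team-estimated cost, splitting stories that exceed three development weeks, and value-based ordering) can be sketched roughly as follows. The `Story` fields and the sample stories are illustrative assumptions for this sketch, not part of any standard XP tooling:

```python
from dataclasses import dataclass


@dataclass
class Story:
    name: str
    value: int        # business value assigned by the customer
    cost_weeks: int   # development cost estimated by the XP team


def needs_split(story: Story, limit_weeks: int = 3) -> bool:
    """A story costing more than three development weeks must be split by the customer."""
    return story.cost_weeks > limit_weeks


def order_by_value(stories):
    """One of XP's three ordering strategies: highest-value stories first."""
    return sorted(stories, key=lambda s: s.value, reverse=True)


stories = [
    Story("search catalogue", value=5, cost_weeks=2),
    Story("checkout", value=9, cost_weeks=4),
    Story("wish list", value=3, cost_weeks=1),
]
print([s.name for s in order_by_value(stories)])
# → ['checkout', 'search catalogue', 'wish list']
print([s.name for s in stories if needs_split(s)])
# → ['checkout']
```

The other two strategies named above (implement everything immediately, or riskiest first) would just swap the sort key.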

2. Design :

● XP design follows the KIS (Keep It Simple) principle. A simple design is always
preferred over a more complex representation.
● The design provides implementation guidance for a story as it is written –
nothing less, nothing more.
● XP encourages the use of CRC (Class Responsibility Collaborator) cards as an
effective mechanism for thinking about the software in an object oriented
context.
● CRC cards identify and organise the object oriented classes that are relevant to
current software increment.
● The CRC cards are the only design work product produced as a part of XP
process.
● If a difficult design problem is encountered as part of the design of a story, XP recommends the immediate creation of an operational prototype of that portion of the design, called a 'spike solution'.
● XP encourages refactoring – a construction technique that improves the internal structure of code without changing its external behaviour.

3. Coding

● XP recommends that after stories are developed and preliminary design work is done, the team should not move to code, but rather develop a series of unit tests that will exercise each story.
● Once the unit test has been created, the developer is better able to focus on
what must be implemented to pass the unit test.
● Once the code is complete, it can be unit tested immediately, thereby providing instantaneous feedback to the developers.
● A key concept during the coding activity is pair programming. XP recommends
that two people work together at one computer workstation to create code for a
story. This provides a mechanism for real time problem solving and real time
quality assurance.
● As pair programmers complete their work, the code they developed is integrated
with the work of others.
● This continuous integration strategy helps to avoid compatibility and interfacing
problems and provides a smoke testing environment that helps to uncover
errors early.

4. Testing :

● The creation of unit tests before coding is the key element of the XP approach.
● The unit tests that are created should be implemented using a framework that
enables them to be automated. This encourages regression testing strategy
whenever code is modified.
● Individual unit tests are organised into a 'universal testing suite', so integration and validation testing of the system can occur on a daily basis. This provides the XP team with a continual indication of progress and can also raise warning flags early if things are going awry.
● XP acceptance tests, also called customer tests, are specified by the customer
and focus on the overall system feature and functionality that are visible and
reviewable by the customer.
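
The test-first flow described above can be illustrated with a minimal, hypothetical story ("the system shows the total price of an order") using Python's unittest framework; all names here are invented for the example:

```python
import unittest


def order_total(prices):
    # Written *after* the test below -- just enough code to make it pass.
    return sum(prices)


class OrderStoryTest(unittest.TestCase):
    # The unit test comes first, derived from the customer's story-card.

    def test_total_of_items(self):
        self.assertEqual(order_total([10.0, 2.5, 7.5]), 20.0)

    def test_empty_order(self):
        self.assertEqual(order_total([]), 0)


# Because the tests are automated, they can be re-run on every change,
# giving the regression safety net the XP approach relies on.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderStoryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```
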
Module 2 - Software Requirements Analysis and Modeling

Q.1 a) What is an SRS document? Build an SRS document for an online Student Feedback System.
Ans.
a)
SRS stands for Software Requirements Specification. An SRS document is an organisation's understanding of a customer's or potential client's system requirements and dependencies. It gives detailed information about the system by covering several aspects of the software: functionality, external interfaces, performance, attributes, and design constraints.

i) Functionality answers questions such as – What is the software supposed to do?
ii) External interfaces answers questions such as – How does the software interact with people, the system's hardware, other hardware, and other software?
iii) Performance answers questions such as – What is the speed, availability, response time, and recovery time of various software functions?
iv) Attributes answers questions such as – What are the portability, correctness, maintainability, and security considerations?
v) Design constraints answers questions such as – Are there any required standards in effect, implementation constraints, policies for database integrity, resource limits, operating environments, etc.?

The outcome of requirement gathering and analysis phase is the SRS. It is the base
for all software engineering actions and is generated only when all the requirements
are correctly gathered.

An outline of the SRS is as follows:

1. Introduction
a. Document purpose
b. Product Scope
c. Intended audience, & Document overview
d. Definitions, acronyms, abbreviations
e. Document conventions
f. References and acknowledgements.

2. Overall description
a. Product perspective
b. Product functionality
c. Users and characteristics
d. Operating environments
e. Design and implementation constraints
f. User documentation
g. Assumptions and dependencies

3. Specific Requirements
a. External interface requirements
b. Functional requirements
c. Behavioural requirements

4. Other Non-functional requirements


a. Performance requirement
b. Safety and security
c. Software quality assurance

5. Other requirements
Appendix A – Data Dictionary

Q.2 ) SRS for Online Student Feedback System

1. Introduction
a. Document purpose = The document aims to provide details to build an Online Feedback System for students.

b. Product Scope = This system will allow students to give their feedback regarding the subjects taught in college. This feedback system will be provided by their professors, and students' feedback will be accessible to the HoDs and to the college Principal.

c. Intended audience, & Document overview = The system will be utilised by college students belonging to different branches, and by the Heads of Department and the Principal to keep track of the responses.

d. Definitions, acronyms, abbreviations =
HoD = Head of Department

e. Document conventions = The typography followed here is Font Style: Arial, Font Size: 13; titles are represented in bold and subtitles have font size 13.
f. References and acknowledgements =

2. Overall description
a. Product perspective = This system aims to create an application that will allow students to add individual feedback, either anonymously (protecting their identity) or identified by their roll numbers. It will also allow the college authorities to view students' responses corresponding to a course and its instructor.
b. Product functionality =
i) Schools
ii) Educational institutions
iii) Universities
iv) Coaching institutes
c. Users and characteristics = Common users of this product are as follows
i) School/College students
ii) Hostel Students
iii) College authorities
iv) Coaching institutes

d. Operating environments =
The software can operate on any system with Java installed, such as:
1. Windows
2. Mac OS
3. Linux
4. A code editor such as VS Code or IntelliJ IDEA

3. Functional Requirements:
3.1 Description:
To identify the staff, the system should display each staff member's photograph, along with their name, corresponding subjects, and other information.

3.2 Operations Scenario:
There should be a student database and a teacher database. The student database should contain names and the corresponding feedback.

4. Preliminary Schedule:
The system is expected to be designed in 6 months.
Q.3 SRS for Hospital Management System (2022)

Q.4 SRS for given scenarios (2016)

Product perspective:

This product will consist of a departments section, an instructors section, and a student enrolment section. Here, departments will be able to create the subjects they offer and add all instructors belonging to their department. Students should be able to enrol for at most 5 subjects. Instructors should be able to select departments and the subjects they wish to teach, with a limit of teaching up to 3 subjects.

Scope and objectives: The system will allow the department to handle its enrolment and subject provisions and allotments; teachers should be able to select the subjects of interest to teach; and students should be capable of enrolling for a subject.

Functional requirements =

1. Login Module (LM): Users (department, student, and teachers) shall be able to load the Login Module in an internet browser. The LM shall support the user in logging into the system. The login panel shall contain a field for the username and a field for the password. The password field shall be masked with symbols as the user types. It shall also contain a button labelled Login. When the user clicks the Login button, the username and password shall be verified by the database administrator, and only then will the user be able to use the system functions.
2. Registered Users Module (RUM): After successful login, the user shall be able to continue navigating through the website and view detailed school/college information. After successful login, the user (department, student, or teacher) shall be able to update and maintain their profile, such as changing their password and the personal details and information to be displayed.
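
The login check described in requirement 1 could be sketched as below. The function names, the `USERS` table, and the use of SHA-256 are illustrative assumptions; a production system would use a dedicated password-hashing function such as bcrypt or scrypt:

```python
import hashlib


def hash_password(password: str, salt: str) -> str:
    # Store only a salted hash, never the plain-text password.
    return hashlib.sha256((salt + password).encode()).hexdigest()


# Hypothetical stand-in for the user table maintained by the administrator.
USERS = {
    "dept_admin": ("salt1", hash_password("secret", "salt1")),
}


def login(username: str, password: str) -> bool:
    # Verify the credentials; only on success may system functions be used.
    record = USERS.get(username)
    if record is None:
        return False
    salt, stored_hash = record
    return hash_password(password, salt) == stored_hash


print(login("dept_admin", "secret"))  # → True
print(login("dept_admin", "wrong"))   # → False
```
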

Non-functional requirements=

These may include

a. Performance requirement
b. Safety and security
c. Software quality assurance

a. Under performance requirements, the department should be notified as soon as a teacher selects a subject to teach; the system should be fast enough to avoid two teachers selecting the same subject once the limit has been reached.

b. Safety and security: The system should make sure that not more than one department is teaching the same subject. It should also ensure that a student does not register as a teacher in the system.

c. Software quality assurance: The courses and teaching faculty under a department should be updated from time to time.
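
The enrolment limits stated in the product perspective (students enrol in at most 5 subjects, instructors teach at most 3) could be enforced with a simple registry like the sketch below; the class and method names are assumptions for illustration only:

```python
class EnrolmentRegistry:
    """Sketch of the stated limits: a student may enrol in at most 5
    subjects and an instructor may teach at most 3."""

    STUDENT_LIMIT = 5
    TEACHER_LIMIT = 3

    def __init__(self):
        self.student_subjects = {}   # student id -> set of subjects
        self.teacher_subjects = {}   # teacher id -> set of subjects

    def enrol_student(self, student: str, subject: str) -> bool:
        subjects = self.student_subjects.setdefault(student, set())
        if len(subjects) >= self.STUDENT_LIMIT:
            return False             # limit reached: reject the enrolment
        subjects.add(subject)
        return True

    def assign_teacher(self, teacher: str, subject: str) -> bool:
        subjects = self.teacher_subjects.setdefault(teacher, set())
        if len(subjects) >= self.TEACHER_LIMIT:
            return False             # limit reached: reject the assignment
        subjects.add(subject)
        return True


reg = EnrolmentRegistry()
for i in range(6):
    ok = reg.enrol_student("s1", f"subject-{i}")
print(ok)  # → False (the 6th enrolment is rejected)
```
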
Module 3 - Software Estimation Metrics

1) What is cost estimation? Describe the LOC method.

For any new software project, it is necessary to know how much it will cost to develop and how much development time it will take. Several estimation procedures have been developed, and they have the following attributes in common.

1. Project scope must be established in advanced.


2. Software metrics are used as a support from which evaluation is made.
3. The project is broken into small PCs which are estimated individually.
To achieve true cost & schedule estimate, several option arise.
4. Delay estimation

Uses of Cost Estimation


1. During the planning stage, one needs to choose how many engineers are
required for the project and to develop a schedule.
2. In monitoring the project's progress, one needs to assess whether the project
is progressing according to plan and take corrective action, if
necessary.

LINES OF CODE-
Lines of code is basically a method in which we count the number of lines present in the
code in order to estimate the size of the software.
Things we count in lines of code:
Declarations
Actual code
Logic and computation

We don't count:
1) Blank lines
2) Commented lines

They are not included in the lines of code even though they are written in the code because:
1) Blank lines increase readability.
2) Commented lines help in understanding the functionality of the code and serve maintenance purposes.

The size of the software basically reflects the amount of effort that goes into writing the
code, but counting the blank and commented lines would give a false impression of the
productivity of the software.
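The counting rules above can be sketched as a small helper. This is an illustrative example, not a standard tool; `count_loc` is an invented name, and the sketch assumes `#`-style comment lines.

```python
# Illustrative LOC counter following the rules above: blank lines and
# comment-only lines are excluded from the count.

def count_loc(source: str, comment_prefix: str = "#") -> int:
    """Count effective lines of code, skipping blanks and comments."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:                         # blank line: readability only
            continue
        if stripped.startswith(comment_prefix):  # comment-only line
            continue
        loc += 1
    return loc


sample = """
# compute factorial
def fact(n):
    # base case
    if n == 0:
        return 1
    return n * fact(n - 1)
"""
print(count_loc(sample))  # 4 effective lines
```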
● KLOC- Thousand lines of code

● NLOC- Non-comment lines of code

● KDSI- Thousands of delivered source instruction

The size is estimated by comparing it with the existing systems of the same kind. The experts use it
to predict the required size of various components of software and then add them to get the total
size.

It’s tough to estimate LOC by analysing the problem definition. Only after the whole code has been
developed can accurate LOC be estimated. This statistic is of little utility to project managers
because project planning must be completed before development activity can begin.

Two separate source files having a similar number of lines may not require the same effort. A file
with complicated logic would take longer to create than one with simple logic. Proper estimation
may not be attainable based on LOC. The length of time it takes to solve an issue is measured in LOC.
This statistic will differ greatly from one programmer to the next. A seasoned programmer can write
the same logic in fewer lines than a newbie coder.

Advantages:

● Universally accepted and used in many models like COCOMO.

● Estimation is closer to the developer’s perspective.

● Simple to use.

Disadvantages:

● Different programming languages contain a different number of lines.

● No proper industry standard exists for this technique.

● It is difficult to estimate the size using this technique in the early stages of the project.

2. Explain the COCOMO model.

• COCOMO-II is the revised version of the original COCOMO (Constructive Cost
Model) and was developed at the University of Southern California.
• It is a model that allows one to estimate the cost, effort and schedule when
planning a new software development activity.
It consists of three sub-models:

• 1. End User Programming:
Application generators are used in this sub-model. End users write the code by
using these application generators.
Example – spreadsheets, report generators, etc.

• 2(a). Application Generators and Composition Aids –


This category will create largely prepackaged capabilities for user programming. Their
product will have many reusable components. Typical firms operating in this sector
are Microsoft, Lotus,
Oracle, IBM, Borland, Novell.
• 2(b). Application Composition Sector –
This category is too diversified to be handled by prepackaged solutions. It
includes GUIs, databases, and domain-specific components such as financial, medical or
industrial process control packages.
• 2(c). System Integration –
This category deals with large scale and highly embedded systems.
• 3. Infrastructure Sector:
This category provides infrastructure for the software development like Operating
System, Database Management System, User Interface Management System,
Networking System, etc.

• Stage-I:
It supports estimation of prototyping. For this it uses Application Composition
Estimation Model. This model is used for the prototyping stage of application
generator and system integration.
• Stage-II:
It supports estimation in the early design stage of the project, when we know less
about it. For this it uses the Early Design Estimation Model. This model is used in the early
design stage of application generators, infrastructure, and system integration.
• Stage-III:
It supports estimation in the post architecture stage of a project. For this it uses Post
Architecture Estimation Model. This model is used after the completion of the detailed
architecture of application generator, infrastructure, system integration.
• The Application Composition Estimation Model allows one to estimate the cost and effort
at Stage 1 of the COCOMO II model.
• In this model, size is first estimated using Object Points.
• Object Points are easy to identify and count. Object Points define screens, reports, and
third generation language (3GL) modules as objects.
• Object Point estimation is a relatively new size estimation technique, but it is well suited
to the early prototyping stage, where little design detail is available.
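As a hedged aside, the flavour of such effort estimation can be shown with the original (basic) COCOMO effort equation, Effort = a · (KLOC)^b, rather than COCOMO II's full multi-driver model; the coefficients below are the classic values for an "organic" project, and the 32-KLOC size is an invented example.

```python
# Sketch of the basic COCOMO effort equation (COCOMO 81, not COCOMO II):
# Effort in person-months = a * (KLOC)^b.
# a=2.4, b=1.05 are the standard coefficients for an "organic" project.

def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimated effort in person-months for a project of `kloc` KLOC."""
    return a * (kloc ** b)


# Illustrative estimate for a hypothetical 32-KLOC organic project.
print(round(cocomo_effort(32), 1))
```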

3. Explain different size estimation metrics with their advantages and
disadvantages.
Estimation of the size of the software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time which will be needed to build the project.
Various measures are used in project size estimation. Some of these are:
● Lines of Code
● Number of entities in ER diagram
● Total number of processes in detailed data flow diagram
● Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in
a project.
The units of LOC are:
● KLOC- Thousand lines of code
● NLOC- Non-comment lines of code
● KDSI- Thousands of delivered source instruction
The size is estimated by comparing it with the existing systems of the same kind. The experts use it
to predict the required size of various components of software and then add them to get the total
size. It’s tough to estimate LOC by analysing the problem definition. Only after the whole code has
been developed can accurate LOC be estimated. This statistic is of little utility to project managers
because project planning must be completed before development activity can begin. Two separate
source files having a similar number of lines may not require the same effort. A file with complicated
logic would take longer to create than one with simple logic. Proper estimation may not be
attainable based on LOC. The length of time it takes to solve an issue is measured in LOC. This
statistic will differ greatly from one programmer to the next. A seasoned programmer can write the
same logic in fewer lines than a newbie coder.
Advantages:
● Universally accepted and used in many models like COCOMO.
● Estimation is closer to the developer’s perspective.
● Simple to use.
Disadvantages:
● Different programming languages contain a different number of lines.
● No proper industry standard exists for this technique.
● It is difficult to estimate the size using this technique in the early stages of the project.

2. Number of entities in ER diagram: The ER model provides a static view of the project. It
describes the entities and their relationships. The number of entities in the ER model can
be used to estimate the size of the project. The number of entities depends on the size of
the project: more entities need more classes/structures, thus leading to more code.
Advantages:
● Size estimation can be done during the initial stages of planning.
● The number of entities is independent of the programming technologies used.
Disadvantages:
● No fixed standards exist. Some entities contribute more to project size than others.
● Just like FPA, it is less used in cost estimation models. Hence, it must be converted to LOC.

3. Total number of processes in detailed data flow diagram: A Data Flow Diagram (DFD)
represents the functional view of software. The model depicts the main processes/functions
involved in the software and the flow of data between them. The number of functions in the
DFD is used to predict software size. Already existing processes of a similar type are studied
and used to estimate the size of each process. The sum of the estimated sizes of the
processes gives the final estimated size.
Advantages:
● It is independent of the programming language.
● Each major process can be decomposed into smaller processes. This increases the
accuracy of the estimation.
Disadvantages:
● Studying similar kinds of processes to estimate size takes additional time and effort.
● Not all software projects require the construction of a DFD.

4. Function Point Analysis: In this method, the number and type of functions supported by the
software are used to find the FPC (function point count).
The steps in function point analysis are:
● Count the number of functions of each proposed type.
● Compute the Unadjusted Function Points (UFP).
● Find the Total Degree of Influence (TDI).
● Compute the Value Adjustment Factor (VAF).
● Find the Function Point Count (FPC).
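The steps above can be sketched numerically. This is an illustrative example: the complexity weights are the standard simple-complexity values for the five function types, but the function counts and the TDI value are made up.

```python
# Sketch of the function point calculation steps listed above.
# Weights are the standard simple-complexity values for External Inputs,
# External Outputs, External Inquiries, Internal and External Logical Files.
# The counts and TDI below are invented illustrative values.

WEIGHTS = {"EI": 3, "EO": 4, "EQ": 3, "ILF": 7, "EIF": 5}


def function_point_count(counts: dict, tdi: int) -> float:
    ufp = sum(WEIGHTS[k] * n for k, n in counts.items())  # Unadjusted FP
    vaf = 0.65 + 0.01 * tdi                               # Value Adjustment Factor
    return ufp * vaf                                      # FPC = UFP * VAF


counts = {"EI": 10, "EO": 5, "EQ": 4, "ILF": 3, "EIF": 2}
print(function_point_count(counts, tdi=35))
```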
SOFTWARE ENGINEERING

MODULE : 4 : SOFTWARE DESIGN

1. Discuss Modularity and Functional Independence as fundamentals
of design concepts.

Modularity
● Software architecture and design patterns embody modularity.
● That is, software is divided into separately named and addressable components,
sometimes called modules, that are integrated to satisfy problem requirements.
Creating such modules brings modularity to software.
● Modularity is the “single attribute of software that allows a program to be intellectually
manageable”.
● Meyer defines five criteria that enable us to evaluate a design method with respect
to its ability to define an effective modular system.
1. Modular decomposability
A design method provides a systematic mechanism for decomposing the problem
into sub-problems. This reduces the complexity of the problem and modularity can
be achieved.
2. Modular composability
A design method enables existing design components to be assembled into a new
system.
3. Modular understandability
A module can be understood as a standalone unit. Then it will be easier to build and
easier to change.
4. Modular continuity
Small changes to the system requirements result in changes to individual modules
rather than system-wide changes.
5. Modular protection
If an aberrant condition occurs within a module, its effects are constrained within
that module.
Functional Independence
● The concept of functional independence is a direct outgrowth of modularity and the
concepts of abstraction and information hiding.
● Functional independence is achieved by developing modules with “single-minded”
function and an “aversion” to excessive interaction with other modules.
● Stated another way, we want to design software so that each module addresses a
specific sub-function of the requirements and has a simple interface when viewed from
other parts of the program structure.

● Independence is important because software with effective modularity, i.e.
independent modules, is easier to develop: functions may be
compartmentalized and interfaces are simplified.
● Independent modules are easier to maintain because secondary effects caused by
design or code modification are limited, error propagation is reduced, and
reusable modules are possible.
● Functional independence is a key to good design, and design is the key to software
quality.
● Independence is assessed using two qualitative criteria
1. Cohesion
Cohesion is an indication of the relative functional strength of a module. A cohesive
module performs a single task, requiring little interaction with components in
other parts of the program.
2. Coupling
Coupling is an indication of the relative interdependence among modules. Coupling
depends on the interface complexity between modules, the point at which entry or
reference is made to a module, and what data passes across the interface.

2. Explain different architectural styles with suitable brief examples
for each.

Architectural Style:-

An architectural style, sometimes called an architectural pattern, is a set of principles: a
coarse-grained pattern that provides an abstract framework for a family of systems. An
architectural style improves partitioning and promotes design reuse by providing solutions
to frequently recurring problems.
Types of Architectural Styles are:-
1) Client/Server Architectural Style:-
The client/server architectural style describes distributed systems that involve a separate
client and server system, and a connecting network. The simplest form of client/server
system involves a server application that is accessed directly by multiple clients, referred
to as a 2-Tier architectural style.
Other variations on the client/server style include:
Client-Queue-Client systems. This approach allows clients to communicate with other
clients through a server-based queue. Clients can read data from and send data to a
server that acts simply as a queue to store the data.
Peer-to-Peer (P2P) applications. Developed from the Client-Queue-Client style, the P2P
style allows the client and server to swap their roles in order to distribute and synchronize
files and information across multiple clients.
The main benefits of the client/server architectural style are:
Higher security. All data is stored on the server, which generally offers a greater control
of security than client machines.
Centralized data access. Because data is stored only on the server, access and updates
to the data are far easier to administer than in other architectural styles.
2) Component-Based Architectural Style
Component-based architecture describes a software engineering approach to system
design and development. It focuses on the decomposition of the design into individual
functional or logical components that expose well-defined communication interfaces
containing methods, events, and properties.
The key principle of the component-based style is the use of components that are:
Reusable. Components are usually designed to be reused in different scenarios in
different applications. However, some components may be designed for a specific task.
Replaceable. Components may be readily substituted with other similar components.
Not context specific. Components are designed to operate in different environments and
contexts. Specific information, such as state data, should be passed to the component
instead of being included in or accessed by the component.
Extensible. A component can be extended from existing components to provide new
behavior.
3) Domain Driven Design Architectural Style
Domain Driven Design (DDD) is an object-oriented approach to designing software
based on the business domain, its elements and behaviors, and the relationships between
them. It aims to enable software systems that are a realization of the underlying business
domain by defining a domain model expressed in the language of business domain
experts. The domain model can be viewed as a framework from which solutions can then
be rationalized.
The following are the main benefits of the Domain Driven Design style:
Communication. All parties within a development team can use the domain model and
the entities it defines to communicate business knowledge and requirements using a
common business domain language, without requiring technical jargon.
Extensible. The domain model is often modular and flexible, making it easy to update and
extend as conditions and requirements change.
Testable. The domain model objects are loosely coupled and cohesive, allowing them to
be more easily tested.
4) Layered Architectural Style
Layered architecture focuses on the grouping of related functionality within an application
into distinct layers that are stacked vertically on top of each other. Functionality within each
layer is related by a common role or responsibility. Communication between layers is

explicit and loosely coupled. Layering your application appropriately helps to support a
strong separation of concerns that, in turn, supports flexibility and maintainability.
Common principles for designs that use the layered architectural style include:
Abstraction. Layered architecture abstracts the view of the system as a whole while
providing enough detail to understand the roles and responsibilities of individual layers and
the relationships between them.
Encapsulation. No assumptions need to be made about data types, methods and
properties, or implementation during design, as these features are not exposed at layer
boundaries.
Reusable. Lower layers have no dependencies on higher layers, potentially allowing them
to be reusable in other scenarios.
Loose coupling. Communication between layers is based on abstraction and events to
provide loose coupling between layers.
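A minimal sketch of the layered style, with invented class names: each layer depends only on the layer directly below it, so the presentation layer never touches the data layer directly.

```python
# Invented sketch of the layered architectural style described above.

class DataLayer:                     # lowest layer: storage access
    def fetch_user(self, uid):
        return {"id": uid, "name": "Asha"}


class BusinessLayer:                 # middle layer: domain rules
    def __init__(self, data):
        self._data = data

    def display_name(self, uid):
        user = self._data.fetch_user(uid)
        return user["name"].upper()


class PresentationLayer:             # top layer: talks only to business
    def __init__(self, business):
        self._business = business

    def render(self, uid):
        return f"User: {self._business.display_name(uid)}"


app = PresentationLayer(BusinessLayer(DataLayer()))
print(app.render(7))  # User: ASHA
```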
5) Message Bus Architectural Style
Message bus architecture describes the principle of using a software system that can
receive and send messages using one or more communication channels, so that
applications can interact without needing to know specific details about each other. It is a
style for designing applications where interaction between applications is accomplished by
passing messages (usually asynchronously) over a common bus.
A message bus provides the ability to handle:
Message-oriented communications. All communication between applications is based
on messages that use known schemas.
Complex processing logic. Complex operations can be executed by combining a set of
smaller operations, each of which supports specific tasks, as part of a multistep itinerary.
Modifications to processing logic. Because interaction with the bus is based on
common schemas and commands, you can insert or remove applications on the bus to
change the logic that is used to process messages.
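A minimal sketch of the message-bus idea, with invented names: publishers and subscribers share only a topic and a message schema, never direct references to each other.

```python
# Invented sketch of the message-bus style described above.

class MessageBus:
    def __init__(self):
        self._subscribers = {}          # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # senders never know who (if anyone) receives the message
        for handler in self._subscribers.get(topic, []):
            handler(message)


bus = MessageBus()
received = []
bus.subscribe("order.created", received.append)
bus.publish("order.created", {"id": 42})
print(received)  # [{'id': 42}]
```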

3. What are the features of a good user interface? Design an
interface for an Online Air Ticket Reservation System.
Graphical User Interface
User interface design creates an effective communication medium between a human and
a computer. The software engineer designs the user interface by applying an iterative
process.
Features of Good User Interface
● Increased efficiency: If the system fits the way its users work and if it has a good
ergonomic design, users can perform their tasks efficiently. They do not lose time
struggling with the functionality and its appearance on the screen.

● Improved productivity: A good interface does not distract the user, but rather
allows him to concentrate on the task to be done.
● Reduced Errors: Many so-called 'human errors' can be attributed to poor user
interface quality. Avoiding inconsistencies, ambiguities, and so on, reduces user
errors.
● Reduced Training: A poor user interface hampers learning. A well-designed user
interface encourages its users to create proper models and reinforces learning, thus
reducing training time.
● Improved Acceptance: Users prefer systems whose interface is well-designed.
Such systems make information easy to find and provide the information in a form
which is easy to use.
User Interface Design for Online Air Ticket Reservation System
There are two types of users for the Air Ticket Reservation System. One is the Customer
and the other is the administrator. Both the customer and administrator user interface
would be a Graphical User Interface.
The graphical user interface for the customer home page would be as follows:

Fig 1: represents GUI for Customer

Fig 2: Customer Registration Form


Whenever a customer wants to book a flight, he/she needs to register as a user in this
system. The above figure shows the Customer Registration interface, which takes all
necessary information from the user.

Fig 3: Represents searching flight for Booking


The above figure depicts searching for a flight to book. The customer needs to fill in all
the required fields for searching a flight, including the passenger information; on clicking
Search, the available flights for the data given by the customer are displayed.
The Graphical User Interface would mainly consist of hyperlinks, data entry fields like the
E-mail Id field, push buttons like the Login button, etc.
The Administrator of the website would also have a similar Graphical User Interface.
After an administrator logs onto the system, the home page for the administrator would be
as follows:

Fig 4: Represents GUI for Administrator

4. What is the User Interface design process? Explain with one
example.
User Interface (UI) Design:
User Interface (UI) Design focuses on anticipating what users might need to do and
ensuring that the interface has elements that are easy to access, understand, and use to
facilitate those actions.
UI design brings together concepts from interaction design, visual design, and information
architecture.
User interface design creates an effective communication medium between a human and
a computer. The software engineer designs the user interface by applying an iterative
process of interface analysis, interface design, interface construction, and interface
validation.
Features of Good User Interface:
● Increased efficiency: If the system fits the way its users work and if it has a good
ergonomic design, users can perform their tasks efficiently. They do not lose time
struggling with the functionality and its appearance on the screen.
● Improved productivity: A good interface does not distract the user, but rather
allows him to concentrate on the task to be done.
● Reduced Errors: Many so-called 'human errors' can be attributed to poor user
interface quality. Avoiding inconsistencies, ambiguities, and so on, reduces user
errors.
● Reduced Training: A poor user interface hampers learning. A well-designed user
interface encourages its users to create proper models and reinforces learning, thus
reducing training time.
User Interface Design for Online Air Ticket Reservation System:
There are two types of users for the Air Ticket Reservation System. One is the Customer
and the other is the administrator. Both the customer and administrator user interface
would be a Graphical User Interface.
The graphical user interface for the customer home page would be as follows:

Whenever a customer wants to book a flight, he/she needs to register as a user in this
system. The above figure shows the Customer Registration interface, which takes all
necessary information from the user.

The above figure depicts searching for a flight to book. The customer needs to fill in all
the required fields for searching a flight, including the passenger information; on clicking
Search, the available flights for the data given by the customer are displayed.
The Graphical User Interface would mainly consist of hyperlinks, data entry fields like the
E-mail Id field, push buttons like the Login button, etc.
The Administrator of the website would also have a similar Graphical User Interface. After
an administrator logs onto the system, the home page for the administrator would be as
follows:

What is Coupling and Cohesion? Explain the different forms of
each.

Coupling: Coupling is the measure of the degree of interdependence between
modules. Good software will have low coupling. The following are various
forms of coupling:

● Data Coupling: If the dependency between the modules is based on
the fact that they communicate by passing only data, then the
modules are said to be data coupled. In data coupling, the
components are independent of each other and communicate
through data. Module communications don’t contain tramp data.
Example: a customer billing system.
● Stamp Coupling: In stamp coupling, a complete data structure is
passed from one module to another. Therefore, it involves
tramp data. It may be necessary for efficiency reasons; this choice
is made by an insightful designer, not a lazy programmer.
● Control Coupling: If the modules communicate by passing control
information, then they are said to be control coupled. It can be bad if
parameters indicate completely different behavior and good if
parameters allow factoring and reuse of functionality. Example- sort
function that takes comparison function as an argument.
● External Coupling: In external coupling, the modules depend on
other modules, external to the software being developed or to a
particular type of hardware. Ex- protocol, external file, device format,
etc.
● Common Coupling: The modules have shared data such as global
data structures. The changes in global data mean tracing back to all
modules which access that data to evaluate the effect of the change.
So it has got disadvantages like difficulty in reusing modules,
reduced ability to control data accesses, and reduced maintainability.
● Content Coupling: In a content coupling, one module can modify
the data of another module, or control flow is passed from one
module to the other module. This is the worst form of coupling and
should be avoided.

Figure 1 Coupling types
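The sort example mentioned under control coupling can be sketched as follows; the function names are invented for illustration. The caller passes a comparison function, i.e. control information that steers the callee's behaviour, which is the "good" kind of control coupling because it allows factoring and reuse.

```python
# Illustrative sketch of control coupling: the passed-in comparison
# function controls how sort_records orders its input.

from functools import cmp_to_key


def sort_records(records, compare):
    # 'compare' is control information supplied by the caller
    return sorted(records, key=cmp_to_key(compare))


def by_length(a, b):
    # caller-chosen ordering policy: shorter strings first
    return len(a) - len(b)


print(sort_records(["pear", "fig", "banana"], by_length))
# ['fig', 'pear', 'banana']
```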

Cohesion: Cohesion is a measure of the degree to which the elements of the
module are functionally related. It is the degree to which all elements directed
towards performing a single task are contained in the component. Basically,
cohesion is the internal glue that keeps the module together. A good software
design will have high cohesion.

Types of Cohesion:
● Functional Cohesion: Every essential element for a single
computation is contained in the component. A functional cohesion
performs the task and functions. It is an ideal situation.
● Sequential Cohesion: An element outputs some data that becomes
the input for another element, i.e., data flows between the parts. It
occurs naturally in functional programming languages.
● Communicational Cohesion: Two elements operate on the same
input data or contribute towards the same output data. Example-
update record in the database and send it to the printer.
● Procedural Cohesion: Elements of procedural cohesion ensure the
order of execution. Actions are still weakly connected and unlikely to
be reusable. Ex- calculate student GPA, print student record,
calculate cumulative GPA, print cumulative GPA.
● Temporal Cohesion: The elements are related by the timing
involved. In a module with temporal cohesion, all the tasks
must be executed in the same time span. Such a module may
contain the code for initializing all the parts of the system: many
different activities occur, all at one time.
● Logical Cohesion: The elements are logically related and not
functionally. Ex- A component reads inputs from tape, disk, and

network. All the code for these functions is in the same component.
Operations are related, but the functions are significantly different.
● Coincidental Cohesion: The elements are not related (unrelated).
The elements have no conceptual relationship other than their
location in the source code. It is accidental and the worst form of
cohesion. Example: printing the next line and reversing the
characters of a string in a single component.

Figure 2 :Cohesion
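As an illustrative sketch (example invented, not from the text), communicational cohesion looks like this: two operations grouped together because they work on the same input data, one updating the record and one formatting that same record for output.

```python
# Invented sketch of communicational cohesion: both operations below
# operate on the same input data, the student record.

def process_record(record: dict) -> tuple:
    # operation 1: update the record
    updated = {**record, "status": "active"}
    # operation 2: format the same record for printing
    printable = f"{updated['name']}: {updated['status']}"
    return updated, printable


updated, line = process_record({"name": "Asha", "status": "new"})
print(line)  # Asha: active
```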

List principles of software design.

Figure 1 : principles of software design

Design means to draw or plan something to show the look, functions and working of
it.
Software Design is also a process to plan or convert the software requirements
into the steps that need to be carried out to develop a software system. There
are several principles that are used to organize and arrange the structural
components of a software design. Software designs in which these principles are
applied affect the content and the working process of the software from the
beginning.
These principles are stated below :
Principles Of Software Design :
1. Should not suffer from “Tunnel Vision” –
While designing the process, it should not suffer from “tunnel vision”, which
means it should not focus only on completing or achieving the aim
but also consider other effects.
2. Traceable to analysis model –
The design process should be traceable to the analysis model which
means it should satisfy all the requirements that software requires to
develop a high-quality product.
3. Should not “Reinvent The Wheel” –
The design process should not reinvent the wheel, meaning it should not
waste time or effort in creating things that already exist. This speeds up
overall development.
4. Minimize Intellectual distance –
The design process should reduce the gap between real-world problems
and software solutions for that problem meaning it should simply minimize
intellectual distance.
5. Exhibit uniformity and integration –
The design should display uniformity which means it should be uniform
throughout the process without any change. Integration means it should mix
or combine all parts of software i.e. subsystems into one system.
6. Accommodate change
The software should be designed in such a way that it accommodates the
change implying that the software should adjust to the change that is required
to be done as per the user’s need.
7. Degrade gently
The software should be designed in such a way that it degrades gracefully
which means it should work properly even if an error occurs during the
execution.
8. Assessed for quality –
The design should be assessed or evaluated for quality, meaning that
during the evaluation, the quality of the design needs to be checked and
focused on.

9. Review to discover errors
The design should be reviewed which means that the overall evaluation
should be done to check if there is any error present or if it can be minimized.

10. Design is not coding and coding is not design –
Design means describing the logic of the program to solve a problem, and
coding is the language used for the implementation of a design.

Module 5 – Software Testing

1. Write a short note on unit testing.


Unit Testing is a software testing technique by means of which individual units of software, i.e.
groups of computer program modules, usage procedures, and operating procedures, are tested to
determine whether they are suitable for use. It is a testing method in which every independent
module is tested by the developer himself to determine if there is an issue. It is concerned with the
functional correctness of the independent modules.

Objectives of Unit Testing:

• To isolate a section of code.

• To verify the correctness of the code.

• To test every function and procedure.

• To fix bugs early in the development cycle and to save costs.

• To help the developers to understand the code base and enable them to make changes quickly.

• To help with code reuse.

Types of Unit Testing:

There are 2 types of Unit Testing: Manual, and Automated.

Workflow of Unit Testing:
Unit Testing Techniques:

There are 3 types of Unit Testing Techniques. They are

• Black Box Testing: This testing technique is used in covering the unit tests for input, user
interface, and output parts.

• White Box Testing: This technique is used in testing the functional behavior of the system by
giving the input and checking the functionality output including the internal design structure and
code of the modules.

• Gray Box Testing: This technique is used in executing the relevant test cases, test methods, test
functions, and analyzing the code performance for the modules.

Advantages of Unit Testing:

1. Unit Testing allows developers to learn what functionality is provided by a unit and how to use it
to gain a basic understanding of the unit API.

2. Unit testing allows the programmer to refine code and make sure the module works properly.

3. Unit testing enables testing parts of the project without waiting for others to be completed.

Disadvantages of Unit Testing:

1. The process is time-consuming for writing the unit test cases.

2. Unit Testing cannot cover all the errors in a module, because some errors surface only when modules interact, during integration testing.

3. Unit Testing is not efficient for checking errors in the UI (User Interface) part of a module.

4. It requires more time for maintenance when the source code is changed frequently.

5. It cannot cover the non-functional testing parameters such as scalability, the performance of the
system, etc.
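The ideas above can be sketched with a minimal unit test. The function `add` and its tests are hypothetical examples for illustration, not taken from any particular codebase:

```python
import unittest

def add(a, b):
    """Hypothetical unit under test: one small, isolated function."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test isolates and verifies one behaviour of the unit.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -3), -4)

# Run the suite programmatically; exit=False keeps the interpreter alive.
unittest.main(argv=["unused"], exit=False)
```

Because the unit is isolated, a failing test points directly at `add`, which is what makes fault localization cheap at this level.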

2. Explain Integration Strategies?


Integration Testing is defined as a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers. The purpose of this level of testing is to expose defects in the interaction between these software modules when they are integrated.

Integration Testing focuses on checking data communication amongst these modules. Hence it is also termed ‘I & T’ (Integration and Testing), ‘String Testing’, and sometimes ‘Thread Testing’.

Integration Test Case differs from other test cases in the sense it focuses mainly on the interfaces & flow
of data/information between the modules. Here priority is to be given for the integrating links rather
than the unit functions which are already tested.

Sample Integration Test Cases for the following scenario: Application has 3 modules say ‘Login Page’,
‘Mailbox’ and ‘Delete emails’ and each of them is integrated logically.

Here, do not concentrate much on Login Page testing, as it has already been covered in Unit Testing; instead, check how it is linked to the Mail Box page.

Similarly, for the Mail Box: check its integration with the Delete Mails module.

Types of Integration Testing

Software Engineering defines a variety of strategies to execute Integration Testing:

1. Big Bang Approach

2. Incremental Approach, which is further divided into:

   • Top Down Approach

   • Bottom Up Approach

   • Sandwich Approach – a combination of Top Down and Bottom Up

Big Bang Testing

Big Bang Testing is an Integration Testing approach in which all the components or modules are integrated together at once and then tested as a unit. This combined set of components is treated as a single entity while testing. If any of the components are not yet complete, integration testing cannot begin.

Advantages:
Convenient for small systems.

Disadvantages:

Fault Localization is difficult.

Given the sheer number of interfaces that need to be tested in this approach, some interface links to be tested could easily be missed.

Since the Integration testing can commence only after “all” the modules are designed, the testing team
will have less time for execution in the testing phase.

Since all modules are tested at once, high-risk critical modules are not isolated and tested on priority.
Peripheral modules which deal with user interfaces are also not isolated and tested on priority.

Incremental Testing

In the Incremental Testing approach, testing is done by integrating two or more modules that are
logically related to each other and then tested for proper functioning of the application. Then the other
related modules are integrated incrementally and the process continues until all the logically related
modules are integrated and tested successfully.

Incremental Approach, in turn, is carried out by two different Methods:

Bottom Up

Top Down

Stubs and Drivers

Stubs and Drivers are dummy programs used in Integration Testing to facilitate the software testing activity. These programs act as substitutes for the missing modules in the testing. They do not implement the entire programming logic of the software module, but they simulate data communication with the calling module while testing.

Stub: Is called by the Module under Test.

Driver: Calls the Module to be tested.
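As a small sketch of this idea (the billing/tax example is hypothetical, invented for illustration): the stub below stands in for a missing lower-level module, while the driver calls the module under test from above.

```python
def tax_service_stub(amount):
    """Stub: substitutes for a missing tax module; called BY the module
    under test. It returns canned data instead of real tax logic."""
    return round(amount * 0.10, 2)  # fixed 10% rate, no real logic

def compute_total(amount, tax_fn=tax_service_stub):
    """Module under test: depends on a lower-level tax module."""
    return amount + tax_fn(amount)

def driver():
    """Driver: CALLS the module under test and checks its result,
    standing in for a not-yet-written higher-level module."""
    total = compute_total(100.0)
    assert total == 110.0
    return total

print(driver())  # prints 110.0
```

In bottom-up integration the driver is later replaced by the real higher-level module; in top-down integration the stub is later replaced by the real lower-level module.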

Bottom-up Integration Testing

Bottom-up Integration Testing is a strategy in which the lower level modules are tested first. These
tested modules are then further used to facilitate the testing of higher level modules. The process
continues until all modules at top level are tested. Once the lower level modules are tested and
integrated, then the next level of modules are formed.

Diagrammatic Representation:
Advantages:

Fault localization is easier.

No time is wasted waiting for all modules to be developed unlike Big-bang approach

Disadvantages:

Critical modules (at the top level of software architecture) which control the flow of application are
tested last and may be prone to defects.

An early prototype is not possible

Top-down Integration Testing

Top Down Integration Testing is a method in which integration testing takes place from top to bottom
following the control flow of software system. The higher level modules are tested first and then lower
level modules are tested and integrated in order to check the software functionality. Stubs are used for
testing if some modules are not ready.

Diagrammatic Representation:

Advantages:

Fault Localization is easier.


Possibility to obtain an early prototype.

Critical Modules are tested on priority; major design flaws could be found and fixed first.

Disadvantages:

Needs many Stubs.

Modules at a lower level are tested inadequately.

Sandwich Testing

Sandwich Testing is a strategy in which top-level modules are tested together with lower-level modules, while at the same time the lower-level modules are integrated with the top-level modules and tested as a system. It is a combination of the Top-down and Bottom-up approaches, and is therefore called Hybrid Integration Testing. It makes use of both stubs and drivers.

How to do Integration Testing?

The Integration Test procedure, irrespective of the software testing strategy (discussed above), is:

1. Prepare the Integration Test Plan.

2. Design the test scenarios, cases, and scripts.

3. Execute the test cases, followed by reporting the defects.

4. Track and re-test the defects.

Steps 3 and 4 are repeated until the integration completes successfully.

5. Difference between alpha and beta testing?


6. Short note on system testing?
System Testing is a type of software testing that is performed on a complete, integrated system to evaluate the compliance of the system with the corresponding requirements. In system testing, components that have passed integration testing are taken as input.

System Testing is Blackbox

Two Category of Software Testing

• Black Box Testing

• White Box Testing

System test falls under the black box testing category of software testing.

White box testing is the testing of the internal workings or code of a software application. System Testing, being black box, is the opposite: it involves the external workings of the software from the user’s perspective.

Testing the fully integrated application, including external peripherals, checks how components interact with one another and with the system as a whole. This is also called an End-to-End testing scenario.

Verify thorough testing of every input in the application to check for desired outputs.

Testing of the user’s experience with the application.

That is a very basic description of what is involved in system testing. You need to build detailed test
cases and test suites that test each aspect of the application as seen from the outside without looking
at the actual source code.

Software Testing Hierarchy

As with almost any software engineering process, software testing has a prescribed order in which
things should be done. The following is a list of software testing categories arranged in chronological
order. These are the steps taken to fully test new software in preparation for marketing it:

Unit testing performed on each module or block of code during development. Unit Testing is normally
done by the programmer who writes the code.

Integration testing done before, during and after integration of a new module into the main software package. One piece of software can contain several modules, often created by several different programmers, so it is crucial to test each module’s effect on the entire program.

System testing done by a professional testing agent on the completed software product before it is
introduced to the market.

Acceptance testing – beta testing of the product done by the actual end users.

Types of System Testing

Types of system testing a large software development company would typically use include:

• Usability Testing – mainly focuses on the user’s ease to use the application, flexibility in handling
controls and ability of the system to meet its objectives

• Load Testing – is necessary to know that a software solution will perform under real-life loads.

• Regression Testing – involves testing done to make sure none of the changes made over the
course of the development process have caused new bugs. It also makes sure no old bugs
appear from the addition of new software modules over time.

• Recovery Testing – is done to demonstrate that a software solution is reliable, trustworthy, and can successfully recover from possible crashes.
• Migration Testing – is done to ensure that the software can be moved from older system
infrastructures to current system infrastructures without any issues.

• Functional Testing – Also known as functional completeness testing, Functional Testing involves
trying to think of any possible missing functions. Testers might make a list of additional
functionalities that a product could have to improve it during functional testing.

• Hardware/Software Testing – IBM refers to Hardware/Software Testing as “HW/SW Testing”. This is when the tester focuses his/her attention on the interactions between the hardware and software during system testing.

7. Different techniques in white box testing?


White Box Testing is a testing technique in which software’s internal
structure, design, and coding are tested to verify input-output flow and
improve design, usability, and security. In white box testing, code is visible to
testers, so it is also called Clear box testing, Open box testing, Transparent
box testing, Code-based testing, and Glass box testing.
It is one of two parts of the Box Testing approach to software testing. Its
counterpart, Blackbox testing, involves testing from an external or end-user
perspective. On the other hand, White box testing in software engineering is
based on the inner workings of an application and revolves around internal
testing.
White box testing involves the testing of the software code for the following:
• Internal security holes
• Broken or poorly structured paths in the coding processes
• The flow of specific inputs through the code
• Expected output
• The functionality of conditional loops
• Testing of each statement, object, and function on an individual basis
The testing can be done at system, integration, and unit levels of software
development. One of the basic goals of whitebox testing is to verify a working flow
for an application. It involves testing a series of predefined inputs against expected
or desired outputs so that when a specific input does not result in the expected
output, you have encountered a bug.
How do you perform White Box Testing?
We have divided it into two basic steps to give you a simplified explanation of white
box testing. This is what testers do when testing an application using the white box
testing technique:

STEP 1) UNDERSTAND THE SOURCE CODE


The first thing a tester will often do is learn and understand the source code of the
application. Since white box testing involves the testing of the inner workings of an
application, the tester must be very knowledgeable in the programming languages
used in the applications they are testing. Also, the testing person must be highly
aware of secure coding practices. Security is often one of the primary objectives of
testing software. The tester should be able to find security issues and prevent
attacks from hackers and naive users who might inject malicious code into the
application either knowingly or unknowingly.

STEP 2) CREATE TEST CASES AND EXECUTE


The second basic step to white box testing involves testing the application’s source
code for proper flow and structure. One way is by writing more code to test the
application’s source code. The tester will develop little tests for each process or
series of processes in the application. This method requires that the tester have intimate knowledge of the code, and it is often done by the developer. Other methods include Manual Testing, trial-and-error testing, and the use of testing tools, as we will explain further on in this article.

WhiteBox Testing Example


Consider the following piece of code
Printme (int a, int b) {            // Printme is a function
    int result = a + b;
    if (result > 0)
        Print ("Positive", result);
    else
        Print ("Negative", result);
}                                   // end of the source code
The goal of WhiteBox testing in software engineering is to verify all the decision
branches, loops, and statements in the code.
To exercise the statements in the above white box testing example, WhiteBox test
cases would be
• A = 1, B = 1
• A = -1, B = -3
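A runnable translation of the example above (returning the label instead of printing it, so the branches can be asserted; the Python rendering is illustrative):

```python
def printme(a, b):
    """Mirror of the Printme pseudocode above: labels the sum of a and b."""
    result = a + b
    if result > 0:
        return ("Positive", result)
    else:
        return ("Negative", result)

# Each white-box test case exercises one decision branch.
assert printme(1, 1) == ("Positive", 2)     # A = 1, B = 1: true branch
assert printme(-1, -3) == ("Negative", -4)  # A = -1, B = -3: false branch
```

Together the two inputs cover both branches of the single decision, which is exactly what the two listed test cases are chosen to do.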

White Box Testing Techniques


A major White box testing technique is Code Coverage analysis. Code Coverage
analysis eliminates gaps in a Test Case suite. It identifies areas of a program that
are not exercised by a set of test cases. Once gaps are identified, you create test
cases to verify untested parts of the code, thereby increasing the quality of the
software product
There are automated tools available to perform code coverage analysis. Below are a few coverage analysis techniques a white box tester can use:
Statement Coverage:- This technique requires every possible statement in the
code to be tested at least once during the testing process of software engineering.
Branch Coverage – This technique checks every possible path (if-else and other
conditional loops) of a software application.
Apart from the above, there are numerous coverage types such as Condition Coverage, Multiple Condition Coverage, Path Coverage, Function Coverage, etc. Each technique has its own merits and attempts to test (cover) all parts of the software code. Using Statement and Branch coverage, you generally attain 80-90% code coverage, which is sufficient.
Following are important WhiteBox Testing Techniques:

• Statement Coverage
• Decision Coverage
• Branch Coverage
• Condition Coverage
• Multiple Condition Coverage
• Finite State Machine Coverage
• Path Coverage
• Control flow testing
• Data flow testing
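To see the difference between the first techniques on this list, consider a sketch (the `absolute` function is a hypothetical example): a single test can execute every statement yet still miss a branch.

```python
def absolute(x):
    result = x
    if x < 0:        # the one decision point
        result = -x
    return result

# Statement coverage: absolute(-5) alone executes every statement...
assert absolute(-5) == 5
# ...but branch coverage also requires the case where the 'if' is skipped.
assert absolute(3) == 3
```

This is why branch (decision) coverage is strictly stronger than statement coverage, and path coverage stronger still once decisions are combined.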

Types of White Box Testing


White box testing encompasses several testing types used to evaluate the usability of an application, block of code, or specific software package. They are listed below:
• Unit Testing: It is often the first type of testing done on an application. Unit Testing is performed on each unit or block of code as it is developed. Unit Testing is essentially done by the programmer. As a software developer, you develop a few lines of code, a single function or an object, and test it to make sure it works before continuing. Unit Testing helps identify the majority of bugs early in the software development lifecycle. Bugs identified in this stage are cheaper and easier to fix.
• Testing for Memory Leaks: Memory leaks are leading causes of slower
running applications. A QA specialist who is experienced at detecting
memory leaks is essential in cases where you have a slow running software
application.
Apart from the above, a few testing types are part of both black box and white box
testing. They are listed below
• White Box Penetration Testing: In this testing, the tester/developer has full
information of the application’s source code, detailed network information,
IP addresses involved and all server information the application runs on. The
aim is to attack the code from several angles to expose security threats.
• White Box Mutation Testing: Mutation testing is often used to discover the
best coding techniques to use for expanding a software solution.
8. Explain Basis Path Testing?
Basis Path Testing is a white-box testing technique based on the control structure of a program or a
module. Using this structure, a control flow graph is prepared and the various possible paths present in
the graph are executed as a part of testing. Therefore, by definition,

Basis path testing is a technique of selecting the paths in the control flow graph, that provide a basis set
of execution paths through the program or module.

Since this testing is based on the control structure of the program, it requires complete knowledge of
the program’s structure. To design test cases using this technique, four steps are followed :

• Construct the Control Flow Graph

• Compute the Cyclomatic Complexity of the Graph

• Identify the Independent Paths

• Design Test cases from Independent Paths

Basis path testing is a structured (white box) testing technique for designing test cases intended to examine every possible path of execution at least once. Creating and executing tests for all possible paths results in 100% statement coverage and 100% branch coverage.

Example (the numbers 1-7 are flow-graph node labels, referenced by the paths below):

Function fn_delete_element (int value, int array_size, int array[])
{
1       int i;
        location = array_size + 1;

2       for i = 1 to array_size
3           if ( array[i] == value )
4               location = i;
            end if;
        end for;

5       for i = location to array_size
6           array[i] = array[i+1];
        end for;
7       array_size--;
}
Steps to Calculate the independent paths
Step 1 : Draw the Flow Graph of the Function/Program under consideration as shown
below:


Step 2 : Determine the independent paths.

Path 1: 1-2-5-7
Path 2: 1-2-5-6-7
Path 3: 1-2-3-2-5-6-7
Path 4: 1-2-3-4-2-5-6-7
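The example can also be run directly. Below is a hedged Python translation (0-indexed; note that, mirroring the pseudocode, the last element is dropped even when the value is not found, since location stays one past the end):

```python
def delete_element(value, arr):
    """Python translation of fn_delete_element above."""
    location = len(arr)                      # node 1: one past the end
    for i in range(len(arr)):                # nodes 2-4: find last match
        if arr[i] == value:
            location = i
    for i in range(location, len(arr) - 1):  # nodes 5-6: shift left
        arr[i] = arr[i + 1]
    return arr[:-1]                          # node 7: array_size--

assert delete_element(3, [1, 3, 5]) == [1, 5]  # value found: nodes 3-4 taken
assert delete_element(9, [1, 3]) == [1]        # value absent: last element still dropped
```

Inputs can be chosen so that each of the four independent paths listed above is exercised at least once (empty array, value absent, value found at the last position, value found earlier).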

9. Explain Cyclomatic complexity with suitable example?


Cyclomatic complexity of a code section is the quantitative measure of the
number of linearly independent paths in it. It is a software metric used to
indicate the complexity of a program. It is computed using the Control Flow Graph of the program. The nodes in the graph represent the smallest groups of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first.
For example, if source code contains no control flow statement then its
cyclomatic complexity will be 1 and source code contains a single path in it.
Similarly, if the source code contains one if condition then cyclomatic
complexity will be 2 because there will be two paths one for true and the other
for false.
Mathematically, for a structured program, the control flow graph is a directed graph in which an edge joins two basic blocks of the program if control may pass from the first to the second.
So, cyclomatic complexity M would be defined as,

M = E – N + 2P
where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components

Steps that should be followed in calculating cyclomatic complexity and designing test cases are:

• Construction of the graph with nodes and edges from the code.
• Identification of independent paths.
• Cyclomatic complexity calculation.
• Design of test cases.

Consider a section of code such as:

A = 10
IF B > C THEN
A = B
ELSE
A = C
ENDIF
Print A
Print B
Print C

Control Flow Graph of above code


The cyclomatic complexity for the above code is calculated from the control flow graph. The graph shows seven shapes (nodes) and seven lines (edges), hence the cyclomatic complexity is 7 - 7 + 2 = 2.
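The formula can be checked with a trivial helper (a sketch for illustration, not part of any standard library):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """M = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# The graph above: 7 edges, 7 nodes, one connected component -> M = 2.
assert cyclomatic_complexity(7, 7) == 2
# A straight-line program (5 nodes in a chain, 4 edges) has M = 1.
assert cyclomatic_complexity(4, 5) == 1
```

M also equals the number of linearly independent paths, so it tells the tester how many basis-path test cases are needed: two here, one for each branch of the single IF.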
Use of Cyclomatic Complexity:

• Determining the independent path executions, which has proven to be very helpful for developers and testers.
• It can make sure that every path has been tested at least once.
• It thus helps to focus more on uncovered paths.
• Code coverage can be improved.
• Risk associated with the program can be evaluated.
• Using these metrics early in the program helps in reducing the risks.
Advantages of Cyclomatic Complexity:
• It can be used as a quality metric, gives relative complexity of various
designs.
• It is able to compute faster than the Halstead’s metrics.
• It is used to measure the minimum effort and best areas of
concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
Disadvantages of Cyclomatic Complexity:
• It is a measure of the program’s control complexity and not its data complexity.
• In this metric, nested conditional structures are harder to understand than non-nested structures.
• In the case of simple comparisons and decision structures, it may give a misleading figure.

10. Explain Black Box Testing?


Black Box Testing is a software testing method in which the functionalities of
software applications are tested without having knowledge of internal code
structure, implementation details and internal paths. Black Box Testing mainly
focuses on input and output of software applications and it is entirely based on
software requirements and specifications. It is also known as Behavioral Testing.

The Black Box under test can be any software system you want to test: for example, an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application.
Here are the generic steps followed to carry out any type of Black Box Testing.
• Initially, the requirements and specifications of the system are examined.
• Tester chooses valid inputs (positive test scenario) to check whether SUT
processes them correctly. Also, some invalid inputs (negative test scenario)
are chosen to verify that the SUT is able to detect them.
• Tester determines expected outputs for all those inputs.
• Software tester constructs test cases with the selected inputs.
• The test cases are executed.
• Software tester compares the actual outputs with the expected outputs.
• Defects if any are fixed and re-tested.
Types of Black Box Testing
There are many types of Black Box Testing but the following are the prominent
ones –
• Functional testing – This black box testing type is related to the functional
requirements of a system; it is done by software testers.
• Non-functional testing – This type of black box testing is not related to
testing of specific functionality, but non-functional requirements such as
performance, scalability, usability.
• Regression testing – Regression Testing is done after code fixes, upgrades
or any other system maintenance to check the new code has not affected the
existing code.

Tools used for Black Box Testing:


Tools used for Black box testing largely depends on the type of black box testing
you are doing.
• For Functional/ Regression Tests you can use – QTP, Selenium
• For Non-Functional Tests, you can use – LoadRunner, Jmeter

Black Box Testing Techniques


Following are the prominent test strategies among the many used in Black Box Testing:
• Equivalence Class Testing: It is used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage.
• Boundary Value Testing: Boundary value testing is focused on the values at
boundaries. This technique determines whether a certain range of values are
acceptable by the system or not. It is very useful in reducing the number of
test cases. It is most suitable for the systems where an input is within certain
ranges.
• Decision Table Testing: A decision table puts causes and their effects in a
matrix. There is a unique combination in each column.
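A small sketch of boundary value testing (the age-validation rule is a hypothetical example, not from the text): values at, just inside, and just outside each boundary are tested.

```python
def accepts_age(age):
    """Hypothetical system under test: accepts ages 18 to 60 inclusive."""
    return 18 <= age <= 60

# Boundary value test cases around both edges of the valid range.
boundary_cases = {17: False, 18: True, 19: True,
                  59: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert accepts_age(age) == expected
```

Six boundary cases replace exhaustive testing of every possible age, which is exactly the test-case reduction that boundary value analysis (and equivalence partitioning) aims for.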
10. Difference between white and black box testing?

10. Write a short note on software Maintenance?


Software Maintenance is the process of modifying a software product after it
has been delivered to the customer. The main purpose of software maintenance
is to modify and update software applications after delivery to correct faults and
to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system
features, and telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.
Challenges in Software Maintenance:
The various challenges in software maintenance are given below:
• The typical lifetime of any software product is considered to be ten to fifteen years. Since software maintenance is open ended and may continue for decades, it can become very expensive.
• Older programs, which were intended to work on slow machines with less memory and storage capacity, cannot hold their own against newly developed, enhanced software running on modern hardware.
• Changes are frequently left undocumented, which may cause more conflicts in the future.
• As technology advances, it becomes costly to maintain old software.
• Changes that are made can easily harm the original structure of the software, making any subsequent changes difficult.
Categories of Software Maintenance –
Maintenance can be divided into the following:

1. Corrective maintenance:
Corrective maintenance of a software product may be essential either
to rectify some bugs observed while the system is in use, or to
enhance the performance of the system.

2. Adaptive maintenance:
This includes modifications and updations when the customers need
the product to run on new platforms, on new operating systems, or
when they need the product to interface with new hardware and
software.

3. Perfective maintenance:
A software product needs maintenance to support the new features
that the users want or to change different types of functionalities of the
system according to the customer demands.

4. Preventive maintenance:
This type of maintenance includes modifications and updations to prevent future problems of the software. It aims to address problems which are not significant at the moment but may cause serious issues in the future.

11.Different types of Software maintenance?


Software may need maintenance for any number of reasons – to keep it up and running,
to enhance features, to rework the system for changes into the future, to move to the
Cloud, or any other changes. Whatever the motivation is for software maintenance, it is
vital for the success of your business. As such, software maintenance is more than
simply finding and fixing bugs. It is keeping the heart of your business up and running.

There are four types of software maintenance:

• Corrective Software Maintenance


• Adaptive Software Maintenance
• Perfective Software Maintenance
• Preventive Software Maintenance

Corrective Software Maintenance

Corrective software maintenance is what one would typically associate with maintenance of any kind. Corrective software maintenance addresses the errors and faults within software applications that could impact various parts of your software, including the design, logic, and code. These corrections usually come from bug reports that were created by users or customers – but corrective software maintenance can help to spot them before your customers do, which can help your brand’s reputation.

Adaptive Software Maintenance

Adaptive software maintenance becomes important when the environment of your software changes. This can be brought on by changes to the hardware, software dependencies, Cloud storage, or the operating system itself. Sometimes, adaptive software maintenance reflects organizational policies or rules as well. Updating services, making modifications to vendors, or changing payment processors can all necessitate adaptive software maintenance.
Perfective Software Maintenance

Perfective software maintenance focuses on the evolution of the requirements and features that exist in your system. As users interact with your applications, they may notice things that you did not, or suggest new features that they would like as part of the software, which could become future projects or enhancements. Perfective software maintenance takes over some of this work, both adding features that can enhance user experience and removing features that are not effective and functional. This can include features that are not used or those that do not help you to meet your end goals.

Preventive Software Maintenance

Preventive software maintenance helps to make changes and adaptations to your software so that it can work for a longer period of time. The focus of this type of maintenance is to prevent the deterioration of your software as it continues to adapt and change. These services can include optimizing code and updating documentation as needed.

Updating software environments, reducing deterioration, and enhancing what is already there to help satisfy the needs of all users are also included among the software maintenance examples.

12.Write a short note on reverse engineering software?


Software Reverse Engineering is a process of recovering the design,
requirement specifications and functions of a product from an analysis of its
code. It builds a program database and generates information from this.
The purpose of reverse engineering is to facilitate the maintenance work by
improving the understandability of a system and to produce the necessary
documents for a legacy system.
Reverse Engineering Goals:
• Cope with Complexity.
• Recover lost information.
• Detect side effects.
• Synthesise higher abstraction.
• Facilitate Reuse.

Steps of Software Reverse Engineering:

1. Collecting information:
This step focuses on collecting all possible information (i.e., source code, design documents, etc.) about the software.

2. Examining the information:

The information collected in step 1 is studied so as to get familiar with the system.

3. Extracting the structure:

This step is concerned with identifying the program structure in the form of a structure chart, where each node corresponds to some routine.
4. Recording the functionality:
During this step, the processing details of each module of the structure chart are recorded using structured languages, like decision tables, etc.

5. Recording data flow:

From the information extracted in steps 3 and 4, a set of data flow diagrams is derived to show the flow of data among the processes.

6. Recording control flow:


High level control structure of the software is recorded.

7. Review extracted design:


Design document extracted is reviewed several times to ensure
consistency and correctness. It also ensures that the design
represents the program.

8. Generate documentation:
Finally, in this step, the complete documentation, including the SRS, design document, history, overview, etc., is recorded for future use.
Module 6 - Software Configuration Management, Quality
Assurance and Maintenance

Q.1] Explain the different types of software Maintenance.


Ans: Software maintenance types are the techniques applied during the last part of the software development process; they help keep the functionality and updates in line with the customer’s needs after a round of user acceptance testing or business acceptance testing. The main reason to uphold this step is to track the quality assurance and fault tolerance records, for continued progress in the software application’s efficacy and performance.
The below are the various types of Software Maintenance:

1. Corrective Software Maintenance


• Corrective software maintenance is the most commonly chosen way to apply updates to a software application system. It is used for identifying and keeping track of all the defects in the application that could make a bigger dent in the application’s performance.
• This type of maintenance can be applied at every stage of the software development life cycle, such as the design phase, the requirement analysis phase, and the code building phase.
• This software maintenance technique can help in finding the areas that require corrections at an earlier point of time, as an effort to guard the client’s product quality and reputation.
• This process is incorporated to handle merging the defect fixes with the existing system, enabling the latest updated software application once the defect tracking activities are complete.
2. Adaptive Software Maintenance
• Adaptive software maintenance is an imperative practice wherever the environment in which
the software application runs is liable to change and the application must keep functioning
correctly.
• The environment here can be hardware or supporting software units, such as the RAM, the
system processing memory, the storage type, the system platform, supporting applications,
plugins, additional library files, etc.
• The activities involved can include revising existing processes, adjusting the functional
flow of the application, increasing visibility to the users, making technical alterations to
the code, etc.
• This process adapts the programs to alterations in the settings in which the software runs.
In most organizations, software professionals prefer environment settings to be managed
separately for different stages, such as development, initial testing, user acceptance
testing, and live production.

3. Perfective Software Maintenance


• Perfective software maintenance centers on modifications to the requirements and
functionality of already established software.
• After development and preliminary testing are completed, user acceptance testing is
performed to validate the product from the user's point of view, instead of testing the
system against the technicalities of how it was built.
• Hence, there is a high possibility that users will identify usability flaws during user
acceptance testing. These flaws can then be recorded as change requests, additional
requirements, or future requirements.
• This process gives higher visibility into what the user expects from the product as a
whole application.

4. Preventive Software Maintenance


• Preventive software maintenance, just as the name says, is used to avoid future downfalls
in the software's performance and the scalability of the overall system.
• Its main intent is to find the elements of the software application that are weak in their
adaptability to new and upcoming changes.
• This helps reduce potentially hazardous behavior from a functional standpoint and maintains
the stability of the application.
• A few of the routine activities involved are evaluating the code against the organization's
coding standards, monitoring run-time values during code execution, checking for efficient
memory usage, estimating code complexity, etc.

Q.2] Explain Change Control and Version Control in SCM.


Ans: Software configuration management (SCM), also known as change management, is an umbrella
activity that is applied throughout the software process. Its goal is to maximize productivity by
minimizing mistakes caused by confusion when coordinating software development. SCM identifies,
organizes, and controls modifications to the software being built by a software development team.
SCM activities are formulated to identify change, control change, ensure that change is properly
implemented, and report changes to others who may have an interest.
Two of the tasks of SCM are Change Control and Version Control:
1. Change Control
Change control is a procedural activity that ensures quality and consistency as changes are made to
a configuration object. A change request is submitted to a configuration control authority, which is
usually a change control board (CCB). The request is evaluated for technical merit, potential side
effects, overall impact on other configuration objects and system functions, and projected cost in
terms of money, time, and resources.
An engineering change order (ECO) is issued for each approved change request. The ECO:
• Describes the change to be made, the constraints to follow, and the criteria for review and
audit.
The baselined CSCI is obtained from the SCM repository:
• Access control governs which software engineers have the authority to access and modify a
particular configuration object.
• Synchronization control helps to ensure that parallel changes performed by two different
people don't overwrite one another.

2. Version Control
Version control is a set of procedures and tools for managing the creation and use of multiple
occurrences of objects in the SCM repository.
Required version control capabilities:
• An SCM repository that stores all relevant configuration objects.
• A version management capability that stores all versions of a configuration object (or
enables any version to be constructed using differences from past versions).
• A make facility that enables the software engineer to collect all relevant configuration
objects and construct a specific version of the software.
• An issue tracking (bug tracking) capability that enables the team to record and track the status
of all outstanding issues associated with each configuration object.
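The capability of "constructing any version using differences from past versions" can be illustrated with Python's standard `difflib` module, which can record an `ndiff` delta between two versions and reconstruct either one from it. This is a toy sketch of delta-based version storage, not how a production SCM tool is implemented:

```python
import difflib

# Two hypothetical versions of the same source file, as lists of lines
v1 = ["def add(a, b):\n", "    return a + b\n"]
v2 = ["def add(a, b):\n", "    # sum two integers\n", "    return a + b\n"]

# Store only the line-by-line delta instead of a full copy of v2
delta = list(difflib.ndiff(v1, v2))

# Reconstruct version 2 (or version 1) on demand from the stored delta
rebuilt_v2 = list(difflib.restore(delta, 2))
print(rebuilt_v2 == v2)  # True
```

A real version management tool applies the same idea at scale, chaining deltas so that any historical version can be rebuilt without storing every version in full.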
The SCM repository maintains a change set:
• Serves as a collection of all changes made to a baseline configuration:
• Used to create a specific version of the software.
• Captures all changes to all files in the configuration along with the reason for changes and
details of who made the changes and when.

Q.3] Explain different metrics used for maintaining Software Quality.


Ans: In software engineering, Software Quality Assurance (SQA) assures the quality of the software.
A set of SQA activities is continuously applied throughout the software process. Software quality is
measured based on software quality metrics.
Software metrics can be classified into three categories −

• Product metrics − Describes the characteristics of the product such as size, complexity,
design features, performance, and quality level.
• Process metrics − These characteristics can be used to improve the development and
maintenance activities of the software.
• Project metrics − These metrics describe the project characteristics and execution. Examples
include the number of software developers, the staffing pattern over the life cycle of the
software, cost, schedule, and productivity.

1. Product Quality Metrics


These metrics include the following −
• Mean Time to Failure
• Defect Density
• Customer Problems
• Customer Satisfaction

• Mean Time to Failure


It is the average operating time between successive failures. This metric is mostly used with
safety-critical systems such as airline traffic control systems, avionics, and weapons systems.
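As a minimal sketch, MTTF can be computed as total operating time divided by the number of failures observed; the figures below are purely illustrative:

```python
def mean_time_to_failure(total_operating_hours, failure_count):
    """Average operating time between successive failures, in hours."""
    return total_operating_hours / failure_count

# Hypothetical example: 3 failures observed over 3,000 hours of operation
print(mean_time_to_failure(3_000, 3))  # 1000.0 hours
```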
• Defect Density
It measures the defects relative to the software size, expressed as lines of code or function points,
i.e., it measures code quality per unit of size. This metric is used in many commercial software systems.
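A minimal sketch of the defect density calculation in defects per thousand lines of code (KLOC), using hypothetical figures:

```python
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical example: 30 defects found in 20,000 lines of code
print(defect_density(30, 20_000))  # 1.5 defects per KLOC
```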
• Customer Problems
It measures the problems that customers encounter when using the product. It contains the customer’s
perspective towards the problem space of the software, which includes the non-defect oriented
problems together with the defect problems.
• Customer Satisfaction
Customer satisfaction is often measured by customer survey data through the five-point scale −
• Very satisfied
• Satisfied
• Neutral
• Dissatisfied
• Very dissatisfied
Satisfaction with the overall quality of the product and its specific dimensions is usually obtained
through various methods of customer surveys.
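One common derived measure is the percentage of satisfied customers, usually taken as the top two categories of the five-point scale; the survey counts below are hypothetical:

```python
# Hypothetical survey tallies on the five-point scale
responses = {
    "Very satisfied": 40,
    "Satisfied": 35,
    "Neutral": 10,
    "Dissatisfied": 10,
    "Very dissatisfied": 5,
}

total = sum(responses.values())
# "Percent satisfied" counts the top two categories
percent_satisfied = 100.0 * (responses["Very satisfied"] + responses["Satisfied"]) / total
print(percent_satisfied)  # 75.0
```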

2. In-process Quality Metrics


In-process quality metrics deal with the tracking of defect arrival during formal machine testing for
some organizations. These metrics include −
• Defect density during machine testing
• Defect arrival pattern during machine testing
• Phase-based defect removal pattern
• Defect removal effectiveness

• Defect density during machine testing


The defect rate during formal machine testing (testing after code is integrated into the system library)
is correlated with the defect rate in the field. Higher defect rates found during testing are an indicator
that the software has experienced higher error injection during its development process, unless the higher
testing defect rate is due to an extraordinary testing effort.
This simple metric of defects per KLOC or function point is a good indicator of quality, while the
software is still being tested. It is especially useful to monitor subsequent releases of a product in the
same development organization.

• Defect arrival pattern during machine testing


The overall defect density during testing will provide only the summary of the defects. The pattern
of defect arrivals gives more information about different quality levels in the field. It includes the
following −
1. The defect arrivals or defects reported during the testing phase by time interval (e.g., week); not
all of these will be valid defects.
2. The pattern of valid defect arrivals when problem determination is done on the reported problems.
This is the true defect pattern.

• Phase-based defect removal pattern


This is an extension of the defect density metric during testing. In addition to testing, it tracks the
defects at all phases of the development cycle, including the design reviews, code inspections, and
formal verifications before testing.
Because a large percentage of programming defects is related to design problems, conducting formal
reviews, or functional verifications to enhance the defect removal capability of the process at the
front-end reduces error in the software. The pattern of phase-based defect removal reflects the overall
defect removal ability of the development process.

• Defect removal effectiveness


It can be defined as follows −

DRE = (Defects removed during a development phase / Defects latent in the product at that phase) × 100%

where the defects latent in the product at a phase are the defects removed during that phase plus the
defects found later.
This metric can be calculated for the entire development process, for the front-end before code
integration and for each phase. It is called early defect removal when used for the front-end and
phase effectiveness for specific phases. The higher the value of the metric, the more effective the
development process and the fewer the defects passed to the next phase or to the field. This metric is
a key concept of the defect removal model for software development.
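A minimal sketch of the DRE calculation, with hypothetical defect counts; the latent defects are taken as those removed in the phase plus those that escaped to later phases:

```python
def defect_removal_effectiveness(removed_in_phase, escaped_to_later_phases):
    """DRE as a percentage: defects removed in a phase relative to the
    defects latent in the product at that phase (removed + escaped)."""
    latent = removed_in_phase + escaped_to_later_phases
    return 100.0 * removed_in_phase / latent

# Hypothetical: 80 defects removed during design review, 20 found later
print(defect_removal_effectiveness(80, 20))  # 80.0 (percent)
```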

3. Maintenance Quality Metrics


Although not much can be done to alter the quality of the product during this phase, the following
metrics help to eliminate the defects as soon as possible with excellent fix quality.
• Fix backlog and backlog management index
• Fix response time and fix responsiveness
• Percent delinquent fixes

• Fix backlog and backlog management index


Fix backlog is related to the rate of defect arrivals and the rate at which fixes for reported problems become
available. It is a simple count of reported problems that remain at the end of each month or each week. Using
it in the format of a trend chart, this metric can provide meaningful information for managing the maintenance
process.
Backlog Management Index (BMI) is used to manage the backlog of open and unresolved problems. It is
calculated as −

BMI = (Number of problems closed during the month / Number of problem arrivals during the month) × 100

If BMI is larger than 100, the backlog is reduced; if BMI is less than 100, the backlog has increased.
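The BMI calculation can be sketched as follows, with hypothetical monthly counts:

```python
def backlog_management_index(problems_closed, problems_arrived):
    """BMI: problems closed during a period as a percentage of the
    problems that arrived during the same period."""
    return 100.0 * problems_closed / problems_arrived

# Hypothetical month: 120 problems closed, 100 new problems arrived
print(backlog_management_index(120, 100))  # 120.0 -> backlog is shrinking
```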

• Fix response time and fix responsiveness


The fix response time metric is usually calculated as the mean time of all problems from open to close. Short
fix response time leads to customer satisfaction.
The important elements of fix responsiveness are customer expectations, the agreed-to fix time, and the ability
to meet one's commitment to the customer.
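The mean fix response time can be sketched as the average of the open-to-close intervals; the problem records below are hypothetical:

```python
from datetime import datetime

def mean_fix_response_days(open_close_pairs):
    """Mean time, in days, from problem open to problem close."""
    total_days = sum((closed - opened).days for opened, closed in open_close_pairs)
    return total_days / len(open_close_pairs)

# Hypothetical problem records as (opened, closed) timestamp pairs
records = [
    (datetime(2023, 1, 1), datetime(2023, 1, 5)),   # 4 days
    (datetime(2023, 1, 2), datetime(2023, 1, 10)),  # 8 days
]
print(mean_fix_response_days(records))  # 6.0
```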

• Percent delinquent fixes


It is calculated as follows −

Percent delinquent fixes = (Number of fixes that exceeded the fix response time criteria by severity
level / Total number of fixes delivered in a specified time) × 100
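A minimal sketch of this calculation, with hypothetical counts:

```python
def percent_delinquent_fixes(delinquent_fixes, total_fixes_delivered):
    """Fixes that exceeded the agreed response-time criteria, as a
    percentage of all fixes delivered in the period."""
    return 100.0 * delinquent_fixes / total_fixes_delivered

# Hypothetical month: 5 of 50 delivered fixes missed their committed date
print(percent_delinquent_fixes(5, 50))  # 10.0 (percent)
```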

Q.4] What are Software Risks? Write a note on RMMM for delayed projects.
Ans: A risk is a potential problem – it might happen and it might not.
Conceptual definition of risk:
• Risk concerns future happenings
• Risk involves change in mind, opinion, actions, places, etc.
• Risk involves choice and the uncertainty that choice entails

Two characteristics of risk


• Uncertainty – the risk may or may not happen; that is, there are no 100% probable risks (those,
instead, are called constraints)
• Loss – the risk becomes a reality and unwanted consequences or losses occur
Different types of risk:

1. Project Risk
Project risks arise in the software development process; they basically affect the budget, schedule,
staffing, resources, and requirements. When project risks become severe, the total cost of the project
increases.
2. Technical Risk
These risks affect the quality and timeliness of the project. If a technical risk becomes reality, then
potential design, implementation, interface, verification, and maintenance problems get created.
Technical risks occur when a problem turns out to be harder to solve than expected.
3. Business Risk
When the feasibility of the software product is in doubt, business risks occur. Business risks can be
classified as follows:
i. Market Risk
When a good quality software product is built but there is no customer for it, this is called market
risk (i.e., no market for the product).
ii. Strategic Risk
When the product is built but does not follow the company's business policies, it brings strategic risk.
iii. Sales Risk
When the product is built but it is not clear how to sell it, the situation brings sales risk.
iv. Management Risk
When senior management or the responsible staff leave the organization, management risk occurs.
v. Budget Risk
Losing the overall budget of the project is called budget risk.

Known risks are those that are identified by evaluating the project plan. There are two types of known
risk
a. Predictable Risk
Predictable risk are those that can be identified in advance based on past project experience
b. Unpredictable Risk
Unpredictable risks are those that cannot be guessed earlier.

RMMM
RMMM stands for risk mitigation, monitoring and management. The strategy for handling risk has three
parts:
• Risk Mitigation
• Risk Monitoring
• Risk Management

• Risk Mitigation
Risk mitigation means preventing the risk from occurring (risk avoidance). The following steps can be
taken to mitigate risks:
1. Communicate with the concerned staff to find probable risks.
2. Find and eliminate all causes that can create risk before the project starts.
3. Develop a policy in the organization that will help the project continue even if some staff leave
the organization.
4. Everybody in the project team should be acquainted with the current development activity.
5. Maintain the corresponding documents in a timely manner.
6. Conduct timely reviews in order to speed up the work.
7. Provide additional staff, if required, for conducting every critical activity during software
development.

• Risk Monitoring
In the risk monitoring process, the project manager must monitor the following:
1. The approach and behaviour of the team members as project pressure varies.
2. The degree to which the team performs with a spirit of teamwork.
3. The type of cooperation among the team members.
4. The types of problems occurring among the team members.
5. The availability of jobs within and outside the organization.
The objectives of risk monitoring are:
1. To check whether the predicted risks really occur or not.
2. To ensure the steps defined to avoid the risks are applied properly.
3. To gather information that can be useful for analyzing the risks.

• Risk Management
The project manager performs this task when a risk becomes a reality. If the project manager has applied
risk mitigation effectively, it becomes much easier to manage the risks.
For example, consider a scenario in which many people are leaving the organization. If sufficient
additional staff are available, the current development activity is known to everybody in the team, and
up-to-date, systematic documentation is available, then any newcomer can easily understand the current
development activity. This ultimately helps the work continue without interruption.

Q.5] Discuss the different categories of risk that help to define impact values in a
risk table.
Ans: Predictable risk categories to be considered in a risk table:
1. Product size – risks associated with overall size of the software to be built
2. Business impact – risks associated with constraints imposed by management or the marketplace.
3. Customer characteristics – risks associated with sophistication of the customer and the developer's
ability to communicate with the customer in a timely manner.
4. Process definition – risks associated with the degree to which the software process has been defined
and is followed.
5. Development environment – risks associated with availability and quality of the tools to be used to
build the project.
6. Technology to be built – risks associated with complexity of the system to be built and the "newness"
of the technology in the system.
7. Staff size and experience – risks associated with overall technical and project experience of the
software engineers who will do the work.

Contents of a Risk Table


A risk table provides a project manager with a simple technique for risk projection or estimation.
It consists of five columns:
• Risk Summary – short description of the risk
• Risk Category – one of seven risk categories
• Probability – estimation of risk occurrence based on group input
• Impact – (1) catastrophic (2) critical (3) marginal (4) negligible
• RMMM – Pointer to a paragraph in the Risk Mitigation, Monitoring, and Management Plan

Developing a Risk Table


1. List all risks in the first column (with the help of the risk item checklists).
2. Mark the category of each risk.
3. Estimate the probability of each risk occurring.
4. Assess the impact of each risk based on an averaging of the four risk components to determine
an overall impact value.
5. Sort the rows by probability and impact in descending order.
6. Draw a horizontal cutoff line in the table that indicates the risks that will be given further
attention.
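The tabulation, sorting, and cutoff steps above can be sketched as follows; the risk entries and the cutoff threshold are hypothetical:

```python
# A minimal risk-table sketch; the entries are hypothetical.
# Impact uses the scale from the text: 1=catastrophic ... 4=negligible,
# so a lower impact number means a more severe risk.
risks = [
    {"summary": "Size estimate too low", "category": "Product size", "probability": 0.60, "impact": 2},
    {"summary": "Staff turnover",        "category": "Staff size",   "probability": 0.60, "impact": 3},
    {"summary": "Funding lost",          "category": "Business",     "probability": 0.30, "impact": 1},
]

# Sort by probability (descending), breaking ties by severity (impact 1 first)
risks.sort(key=lambda r: (-r["probability"], r["impact"]))

# Draw the cutoff line: only risks above a management threshold get further attention
CUTOFF_PROBABILITY = 0.50
managed = [r for r in risks if r["probability"] >= CUTOFF_PROBABILITY]
for r in managed:
    print(r["summary"], r["probability"], r["impact"])
```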

Assessing Risk Impact


1. Three factors affect the consequences that are likely if a risk does occur
• Its nature – This indicates the problems that are likely if the risk occurs
• Its scope – This combines the severity of the risk (how serious it is) with its overall
distribution (how much of the project was affected)
• Its timing – This considers when and for how long the impact will be felt
2. The overall risk exposure formula is RE = P x C
• P = the probability of occurrence for a risk
• C = the cost to the project should the risk actually occur
3. Example
• P = 80% probability that 18 of 60 software components will have to be developed
• C = Total cost of developing 18 components is $25,000
• RE = .80 x $25,000 = $20,000
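The risk exposure example above can be reproduced as a small calculation:

```python
def risk_exposure(probability, cost):
    """RE = P x C: probability of occurrence times the cost to the
    project should the risk actually occur."""
    return probability * cost

# Example from the text: 80% probability, $25,000 total cost
print(risk_exposure(0.80, 25_000))  # 20000.0
```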
Q.6] What is FTR in SQA? What are its objectives? Explain the steps in FTR.
Ans: Formal Technical Review (FTR) is a software quality control activity performed by software
engineers.
Objectives:
• To uncover errors in function, logic, or implementation for any representation of the software
• To verify that the software under review meets its requirements
• To ensure that the software has been represented according to predefined standards
• To achieve software that is developed in a uniform manner
• To make projects more manageable

The review meeting: Each review meeting should be held considering the following constraints-
Involvement of people:
• Between three and five people should be involved in the review.
• Advance preparation should occur, but it should require no more than about two hours of work
per person.
• The duration of the review meeting should be less than two hours. Given these constraints, it
should be clear that an FTR focuses on a specific (and small) part of the overall software.
At the end of the review, all attendees of the FTR must decide whether to:
• Accept the product without any modification;
• Reject the product due to serious errors (once corrected, another review must be performed); or
• Accept the product provisionally (minor errors were encountered and must be corrected, but no
additional review will be required).
Once the decision is made, all FTR attendees complete a sign-off indicating their participation in
the review and their agreement with the findings of the review team.

Review reporting and record keeping :-


• During the FTR, a reviewer (the recorder) actively records all issues that have been raised.
• At the end of the meeting all these issues raised are consolidated and a review list is prepared.
• Finally, a formal technical review summary report is prepared.
It answers three questions :-
1. What was reviewed ?
2. Who reviewed it ?
3. What were the findings and conclusions ?

Review guidelines :- Guidelines for conducting formal technical reviews should be established in
advance, distributed to all reviewers, agreed upon, and then followed. An uncontrolled review can
often be worse than no review at all. The following is a minimum set of guidelines for FTR:
1. Review the product, not the producer.
2. Take written notes (for record purposes).
3. Limit the number of participants and insist upon advance preparation.
4. Develop a checklist for each product that is likely to be reviewed.
5. Allocate resources and a time schedule for FTRs in order to maintain the schedule.
6. Conduct meaningful training for all reviewers in order to make reviews effective.
7. Review earlier reviews that serve as the basis for the current review being conducted.
8. Set an agenda and maintain it.
9. Identify problem areas, but do not attempt to solve every problem noted.
10. Limit debate and rebuttal.
Q.7] Short Note on Risk management.
Ans: Risk management is the process of identifying, assessing, and prioritizing the risks to minimize,
monitor, and control the probability of unfortunate events.
Risk Management Process:
The risk management process can be understood as a workflow of risk identification, risk assessment, and risk control.

A software project can be concerned with a large variety of risks. In order to systematically identify
the significant risks that might affect a software project, it is essential to classify risks into
different classes. The project manager can then check which risks from each class are relevant to the
project.
There are three main classifications of risks which can affect a software project:
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource,
and customer-related problems. A vital project risk is schedule slippage. Since software is
intangible, it is very tough to monitor and control a software project; it is very tough to control
something which cannot be seen. For any manufacturing project, such as the manufacturing of cars,
the project manager can see the product taking shape.

2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing,
and maintenance issues. They also include ambiguous specifications, incomplete specifications,
changing specifications, technical uncertainty, and technical obsolescence. Most technical risks
appear due to the development team's insufficient knowledge about the project.

3. Business risks: This type of risk includes the risk of building an excellent product that no one
needs, losing budgetary or personnel commitments, etc.

Other Risk Categories:


1. Known risks: Those risks that can be uncovered after careful assessment of the project plan, the
business and technical environment in which the project is being developed, and other reliable
information sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience (e.g., past
staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to identify in
advance.

Principle of Risk Management:


1. Global Perspective: In this, we review the bigger system description, design, and implementation.
We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and create
future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the client and
the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of project
management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk
management paradigm.

Q.8] Explain Software Quality Assurance


Ans: Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the
set of activities which ensure processes, procedures as well as standards are suitable for the project and
implemented correctly.
Software Quality Assurance is a process which works parallel to development of software. It focuses
on improving the process of development of software so that problems can be prevented before they
become a major issue. Software Quality Assurance is a kind of Umbrella activity that is applied
throughout the software process.
Software Quality Assurance has:
• A quality management approach
• Formal technical reviews
• Multi testing strategy
• Effective software engineering technology
• Measurement and reporting mechanism

Major Software Quality Assurance Activities:


• SQA Management Plan:
Make a plan for how you will carry out SQA throughout the project. Think about which set of
software engineering activities is best for the project, and check the skill level of the SQA team.

• Set The Check Points:


SQA team should set checkpoints. Evaluate the performance of the project on the basis of
collected data on different check points.

• Multi testing Strategy:


Do not depend on a single testing approach. When a number of testing approaches are available,
use them.

• Measure Change Impact:


A change made to correct an error sometimes reintroduces other errors. Keep measuring the impact
of each change on the project, and re-test after each change to check the compatibility of the fix
with the whole project.

• Manage Good Relations:


In the working environment, maintaining good relations with the other teams involved in the
project's development is mandatory. A bad relationship between the SQA team and the programmers'
team will directly and badly impact the project. Don't play politics.

Benefits of Software Quality Assurance (SQA):


1. SQA produces high quality software.
2. High quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA is beneficial when the software must run without maintenance for a long time.
5. High quality commercial software increases the market share of the company.
6. Improving the process of creating software.
7. Improves the quality of the software.

Disadvantage of SQA:
There are a number of disadvantages of quality assurance, such as the need to add more resources
and employ more workers to help maintain quality.
