
UNIT-2

Requirement Engineering

Requirements engineering (RE) refers to the process of defining, documenting, and
maintaining requirements in the engineering design process. Requirements engineering
provides the appropriate mechanisms for understanding what the customer desires, analyzing
the need, assessing feasibility, negotiating a reasonable solution, specifying the solution
clearly, validating the specification, and managing the requirements as they are transformed
into a working system. Thus, requirements engineering is the disciplined application of
proven principles, methods, tools, and notations to describe a proposed system's intended
behavior and its associated constraints.

Requirement Engineering Process

It is a five-step process, which includes -

1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management
1. Feasibility Study:

The objective of the feasibility study is to establish the reasons for developing
software that is acceptable to users, adaptable to change, and conformant to established
standards.

Types of Feasibility:

1. Technical Feasibility - Technical feasibility evaluates the current technologies
needed to accomplish the customer requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to which the
required software solves business problems and satisfies customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the necessary software
can generate financial benefits for the organization.

2. Requirement Elicitation and Analysis:

This is also known as requirements gathering. Here, requirements are identified
with the help of customers and existing system processes, if available.
Analysis of requirements starts with requirement elicitation. The requirements are analyzed
to identify inconsistencies, defects, omissions, etc. We describe requirements in terms of
relationships and also resolve conflicts, if any.

Problems of Elicitation and Analysis

o Getting all, and only, the right people involved.
o Stakeholders often don't know what they want.
o Stakeholders express requirements in their own terms.
o Stakeholders may have conflicting requirements.
o Requirements change during the analysis process.
o Organizational and political factors may influence system requirements.

3. Software Requirement Specification:

A software requirement specification is a document created by a software analyst after
the requirements have been collected from various sources. The requirements received from
the customer are written in ordinary language; it is the job of the analyst to write the
requirements in technical language so that they can be understood by, and are useful to, the
development team.
The models used at this stage include entity-relationship (ER) diagrams, data flow diagrams
(DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.

o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for modeling
the requirements. DFD shows the flow of data through a system. The system may be
a company, an organization, a set of procedures, a computer hardware system, a
software system, or any combination of the preceding. The DFD is also known as a
data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store information
about all data items defined in DFDs. At the requirements stage, the data dictionary
should at least define customer data items, to ensure that the customer and developers
use the same definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement specification is the
entity-relationship diagram, often called an "E-R diagram." It is a detailed logical
representation of the data for the organization and uses three main constructs i.e. data
entities, relationships, and their associated attributes.

4. Software Requirement Validation:

After the requirement specification is developed, the requirements discussed in the document
are validated. The user might demand illegal or impossible solutions, or experts may
misinterpret the needs. Requirements can be checked against the following conditions -

o Whether they can be practically implemented
o Whether they are correct and consistent with the functionality and specialty of the software
o Whether there are any ambiguities
o Whether they are complete
o Whether they can be demonstrated

Requirements Validation Techniques

o Requirements reviews/inspections: systematic manual analysis of the requirements.
o Prototyping: using an executable model of the system to check requirements.
o Test-case generation: developing tests for requirements to check testability.
o Automated consistency analysis: checking the consistency of structured
requirements descriptions.
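As an illustrative sketch (not part of the original text), the test-case generation technique can be shown by deriving small automated checks from a single requirement. The requirement wording, function name, and values below are hypothetical:

```python
# Illustrative sketch: deriving test cases from a requirement.
# Hypothetical requirement: "total_marks must be between 0 and 100".
# If a requirement cannot be turned into checks like these, it is not testable.

def is_valid_total_marks(total_marks):
    """Check the stated requirement for a single value."""
    return 0 <= total_marks <= 100

# Test cases cover typical values, boundary values, and invalid values.
test_cases = [
    (50, True),    # typical valid value
    (0, True),     # lower boundary
    (100, True),   # upper boundary
    (-1, False),   # below range -> must be rejected
    (101, False),  # above range -> must be rejected
]

for value, expected in test_cases:
    assert is_valid_total_marks(value) == expected
print("All requirement-derived test cases passed")
```

Writing such checks forces the requirement to be precise: a vague requirement ("marks should be reasonable") cannot be converted into concrete test cases.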

5. Software Requirement Management:

Requirement management is the process of managing changing requirements during the
requirements engineering process and system development.
New requirements emerge during the process as business needs change and a better
understanding of the system is developed.

The priority of requirements from different viewpoints changes during the development
process.

The business and technical environment of the system changes during development.

Prerequisite of Software requirements

Collection of software requirements is the basis of the entire software development project.
Hence, they should be clear, correct, and well-defined.

A complete Software Requirement Specifications should be:

o Clear
o Correct
o Consistent
o Coherent
o Comprehensible
o Modifiable
o Verifiable
o Prioritized
o Unambiguous
o Traceable
o Credible source

Software Requirements: Broadly, software requirements should be categorized into two
categories:

1. Functional Requirements: Functional requirements define a function that a system
or system element must be able to perform, and they must be documented in different
forms. Functional requirements describe the behavior of the system as it relates to
the system's functionality.
2. Non-functional Requirements: These are requirements that specify criteria for
judging the operation of a system, rather than specific behaviors.
Non-functional requirements are divided into two main categories:
o Execution qualities, like security and usability, which are observable at run
time.
o Evolution qualities, like testability, maintainability, extensibility, and
scalability, which are embodied in the static structure of the software system.

Software Requirement Specifications

The product of the requirements stage of the software development process is the Software
Requirements Specification (SRS) (also called a requirements document). This report
lays a foundation for software engineering activities and is constructed when all
requirements have been elicited and analyzed. The SRS is a formal report that acts as a
representation of the software, enabling the customers to review whether it (the SRS) is
according to their requirements. It comprises user requirements for the system as well as
detailed specifications of the system requirements.

The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who is writing it. First, the SRS could be written by the client of a system.
Second, the SRS could be written by a developer of the system. The two cases create
entirely different situations and establish different purposes for the document. In the
first case, the SRS is used to define the needs and expectations of the users. In the second
case, the SRS is written for a different purpose and serves as a contract document between
customer and developer.

Characteristics of good SRS

Following are the features of a good SRS document:

1. Correctness: User review is used to ensure the accuracy of the requirements stated in the
SRS. The SRS is said to be correct if it covers all the needs that are truly expected from the
system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1). All essential requirements, whether relating to functionality, performance, design
constraints, attributes, or external interfaces.

(2). Definition of the responses of the software to all realizable classes of input data in all
realizable classes of situations.

Note: It is essential to specify the responses to both valid and invalid input values.

(3). Full labels and references to all figures, tables, and diagrams in the SRS, and definitions
of all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. There are three types of possible conflict in the SRS:

(1). The specified characteristics of real-world objects may conflict. For example,

(a) The format of an output report may be described in one requirement as tabular but in
another as textual.

(b) One condition may state that all lights shall be green while another states that all lights
shall be blue.

(2). There may be a logical or temporal conflict between two specified actions. For
example,

(a) One requirement may state that the program will add two inputs, and another may
state that the program will multiply them.

(b) One requirement may state that "A" must always follow "B," while another requires that
"A" and "B" occur simultaneously.

(3). Two or more requirements may define the same real-world object but use different
terms for that object. For example, a program's request for user input may be called a
"prompt" in one requirement and a "cue" in another. The use of standard terminology and
descriptions promotes consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. If a term with
multiple meanings is used, the requirements report should state the intended meaning in the
SRS so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability
if each requirement in it has an identifier to indicate either the significance or stability of
that particular requirement.

Typically, all requirements are not equally important. Some requirements may be essential,
especially for life-critical applications, while others may be desirable. Each requirement
should be identified to make these differences clear and explicit. Another way to rank
requirements is to distinguish classes of requirements as essential, conditional, and optional.

6. Modifiability: The SRS should be made as modifiable as possible and should be capable of
accommodating changes to the system quickly. Modifications should be properly
indexed and cross-referenced.

7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a
cost-effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.

8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and
if it facilitates the referencing of each condition in future development or enhancement
documentation.

There are two types of Traceability:

1. Backward Traceability: This depends upon each requirement explicitly referencing its
source in earlier documents.

2. Forward Traceability: This depends upon each element in the SRS having a unique
name or reference number.

The forward traceability of the SRS is especially crucial when the software product enters
the operation and maintenance phase. As code and design document is modified, it is
necessary to be able to ascertain the complete set of requirements that may be concerned by
those modifications.
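The two directions of traceability can be sketched in code. This is an illustrative example, not from the original text; all requirement IDs, source names, and file names are hypothetical:

```python
# Minimal sketch of a requirements traceability record.
# Backward traceability: each requirement references its source document.
# Forward traceability: each requirement has a unique ID, so design and code
# artifacts can reference it and change impact can be ascertained.

requirements = {
    "REQ-001": {"source": "customer-interview-01",
                "implemented_by": ["module_login.py"]},
    "REQ-002": {"source": "feasibility-report",
                "implemented_by": ["module_report.py"]},
}

def impacted_requirements(artifact):
    """Return the requirement IDs affected when an artifact is modified."""
    return [rid for rid, req in requirements.items()
            if artifact in req["implemented_by"]]

# During maintenance: which requirements must be re-checked after this change?
print(impacted_requirements("module_report.py"))
```

The same lookup run in the other direction (from a requirement ID to its `source`) gives backward traceability to the originating document.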

9. Design Independence: There should be an option to choose from multiple design
alternatives for the final system. More specifically, the SRS should not contain any
implementation details.

10. Testability: An SRS should be written in such a method that it is simple to generate test
cases and test plans from the report.

11. Understandable by the customer: An end user may be an expert in his/her own
domain but might not be trained in computer science. Hence, the use of formal
notations and symbols should be avoided as much as possible. The language
should be kept simple and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the
details should be explained explicitly, whereas for a feasibility study, less detail can be
used. Hence, the level of abstraction varies according to the objective of the SRS.

Properties of a good SRS document

The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and, at the same time, unambiguous, consistent,
and complete. Verbose and irrelevant descriptions decrease readability and also increase the
possibility of errors.

Structured: It should be well-structured. A well-structured document is simple to
understand and modify. In practice, the SRS document undergoes several revisions to cope
with evolving user requirements. Often, user requirements evolve over a period of time.
Therefore, to make modifications to the SRS document easy, it is vital to make the
report well-structured.

Black-box view: It should only define what the system should do and refrain from stating
how to do it. This means that the SRS document should specify the external behavior of
the system and not discuss implementation issues. The SRS report should view the
system to be developed as a black box and should define the externally visible behavior of
the system. For this reason, the SRS report is also known as the black-box specification of a
system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.

Response to undesired events: It should characterize acceptable responses to undesired
events. These are called the system's responses to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have
been met in an implementation.

Requirements Analysis

Requirement analysis is a significant and essential activity after elicitation. We analyze,
refine, and scrutinize the gathered requirements to make them consistent and unambiguous.
This activity reviews all requirements and may provide a graphical view of
the entire system. After the completion of the analysis, it is expected that the
understandability of the project will improve significantly. Here, we may also use
interaction with the customer to clarify points of confusion and to understand which
requirements are more important than others.

The various steps of requirement analysis are shown in fig:


(i) Draw the context diagram: The context diagram is a simple model that defines the
boundaries and interfaces of the proposed systems with the external world. It identifies the
entities outside the proposed system that interact with the system. The context diagram of
student result management system is given below:

(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype, something that looks and preferably acts as part
of the system they say they want.

We can use their feedback to modify the prototype continuously until the customer is
satisfied. Hence, the prototype helps the client visualize the proposed system and
increases understanding of the requirements. When developers and users are not sure
about some of the elements, a prototype may help both parties make a final decision.
Some projects are developed for the general market. In such cases, the prototype should be
shown to a representative sample of the population of potential purchasers. Even
though a person who tries out a prototype may not buy the final system, their feedback
may allow us to make the product more attractive to others.

The prototype should be built quickly and at a relatively low cost. Hence it will always have
limitations and would not be acceptable in the final system. This is an optional activity.

(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the relationships
between them. The graphical view may help to find incorrect, inconsistent, missing, and
superfluous requirements. Such models include the Data Flow diagram, Entity-Relationship
diagram, Data Dictionaries, State-transition diagrams, etc.

(iv) Finalise the requirements: After modeling the requirements, we will have a better
understanding of the system behavior. The inconsistencies and ambiguities have been
identified and corrected. The flow of data amongst various modules has been analyzed.
Elicitation and analysis activities have provided better insight into the system. Now we
finalize the analyzed requirements, and the next step is to document these requirements in a
prescribed format.

Data Flow Diagrams

A Data Flow Diagram (DFD) is a traditional visual representation of the information flows
within a system. A neat and clear DFD can depict the right amount of the system
requirement graphically. It can be manual, automated, or a combination of both.

It shows how data enters and leaves the system, what changes the information, and where
data is stored.

The objective of a DFD is to show the scope and boundaries of a system as a whole. It may
be used as a communication tool between a systems analyst and any person who plays a part
in the system, and it acts as a starting point for redesigning a system. The DFD is also called
a data flow graph or bubble chart.

The following observations about DFDs are essential:

1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the
order of events; arrows in a DFD represent flowing data. A DFD does not imply any
order of events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in
a DFD, suppress that urge! A diamond-shaped box is used in flow charts to
represent decision points with multiple exit paths, of which only one is taken.
This implies an ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error handling
until the end of the analysis.

Standard symbols for DFDs are derived from the electric circuit diagram analysis and are
shown in fig:

Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.

Data Flow: A curved line shows the flow of data into or out of a process or data store.

Data Store: A set of parallel lines shows a place for the collection of data items. A data
store indicates that the data is stored and can be used at a later stage or by other
processes. The data store can contain an element or a group of elements.

Source or Sink: Source or Sink is an external entity and acts as a source of system inputs
or sink of system outputs.

Levels in Data Flow Diagrams (DFD)

The DFD may be used to represent a system or software at any level of abstraction. In fact,
DFDs may be partitioned into levels that represent increasing information flow and
functional detail. Levels in a DFD are numbered 0, 1, 2, or beyond. Here, we will see
primarily three levels in the data flow diagram: 0-level DFD, 1-level DFD, and
2-level DFD.

0-level DFD

Also known as the fundamental system model or context diagram, it represents the entire
software requirement as a single bubble with input and output data denoted by incoming
and outgoing arrows. The system is then decomposed and described as a DFD with multiple
bubbles. Parts of the system represented by each of these bubbles are then decomposed and
documented as more and more detailed DFDs. This process may be repeated at as many
levels as necessary until the program at hand is well understood. It is essential to preserve
the number of inputs and outputs between levels; this concept is called leveling by
DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one output y, then the expanded
DFD that represents "A" should have exactly two external inputs and one external output, as
shown in fig:

The Level-0 DFD, also called context diagram of the result management system is shown in
fig. As the bubbles are decomposed into less and less abstract bubbles, the corresponding
data flow may also be needed to be decomposed.
1-level DFD

In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this
level, we highlight the main objectives of the system and break down the high-level process
of the 0-level DFD into subprocesses.
2-Level DFD

2-level DFD goes one process deeper into parts of 1-level DFD. It can be used to project or
record the specific/necessary detail about the system's functioning.
Data Dictionaries

A data dictionary is a file or a set of files that contains a database's metadata. The data
dictionary holds records about other objects in the database, such as data ownership, data
relationships to other objects, and other data. The data dictionary is an essential component
of any relational database. Ironically, because of its importance, it is invisible to most
database users. Typically, only database administrators interact with the data dictionary.

The data dictionary, in general, includes information about the following:

o Name of the data item
o Aliases
o Description/purpose
o Related data items
o Range of values
o Data structure definition/Forms

The name of the data item is self-explanatory.

Aliases include other names by which the data item is known, e.g., DEO for Data Entry
Operator and DR for Deputy Registrar.

Description/purpose is a textual description of what the data item is used for or why it
exists.

Related data items capture relationships between data items, e.g., total_marks must always
equal internal_marks plus external_marks.
Range of values records all possible values, e.g., total marks must be positive and between
0 and 100.

Data structure forms: Data flows capture the names of the processes that generate or receive
the data item. If the data item is primitive, then the data structure form captures the physical
structure of the data item. If the data item is itself a data aggregate, then the data structure
form captures the composition of the data item in terms of other data items.

The mathematical operators used within the data dictionary are defined in the table:

Notation      Meaning

x = a + b     x consists of data elements a and b.

x = [a / b]   x consists of either data element a or data element b.

x = (a)       x consists of optional data element a.

x = y[a]      x consists of y or more occurrences of data element a.

x = [a]z      x consists of z or fewer occurrences of data element a.

x = y[a]z     x consists of between y and z occurrences of data element a.
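A data dictionary entry can be sketched as a plain record in code. This illustrative example (not from the original text) reuses the total_marks item described earlier; the field layout and helper function are hypothetical:

```python
# Hedged sketch: one data dictionary entry, stored as a plain dict.
# The fields mirror the items listed above; values are hypothetical.

total_marks_entry = {
    "name": "total_marks",
    "aliases": ["TM"],
    "description": "Total marks obtained by a student in a course",
    "related_items": "total_marks = internal_marks + external_marks",
    "range": (0, 100),
    # Composition in the notation above: x = a + b
    "structure": "internal_marks + external_marks",
}

def in_range(entry, value):
    """Check a value against the recorded range of the data item."""
    low, high = entry["range"]
    return low <= value <= high

assert in_range(total_marks_entry, 85)       # valid mark
assert not in_range(total_marks_entry, 150)  # outside recorded range
```

Keeping such entries in one shared repository is what ensures that customers and developers use the same definitions and terminology.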

Entity-Relationship Diagrams
ER-modeling is a data modeling method used in software engineering to produce a
conceptual data model of an information system. Diagrams created using this ER-modeling
method are called Entity-Relationship Diagrams or ER diagrams or ERDs.

Purpose of ERD

o The database analyst gains a better understanding of the data to be contained in the
database through the step of constructing the ERD.
o The ERD serves as a documentation tool.
o Finally, the ERD is used to communicate the logical structure of the database to users;
in particular, it effectively conveys the logic of the database.

Components of an ER Diagram

1. Entity

An entity can be a real-world object, either animate or inanimate, that is easily
identifiable. An entity is denoted by a rectangle in an ER diagram. For example, in a school
database, students, teachers, classes, and courses offered can be treated as entities. All these
entities have some attributes or properties that give them their identity.

Entity Set

An entity set is a collection of similar types of entities. An entity set may contain entities
with attributes sharing similar values. For example, a Student set may contain all the students
of a school; likewise, a Teacher set may contain all the teachers of a school from all
faculties. Entity sets need not be disjoint.
2. Attributes

Entities are denoted utilizing their properties, known as attributes. All attributes have
values. For example, a student entity may have name, class, and age as attributes.

There exists a domain or range of values that can be assigned to attributes. For example, a
student's name cannot be a numeric value. It has to be alphabetic. A student's age cannot be
negative, etc.

There are five types of attributes:

1. Key attribute
2. Composite attribute
3. Single-valued attribute
4. Multi-valued attribute
5. Derived attribute

1. Key attribute: A key is an attribute or a collection of attributes that uniquely identifies an
entity among the entity set. For example, the roll_number of a student makes him or her
identifiable among students.
There are mainly three types of keys:

1. Super key: A set of attributes that collectively identifies an entity in the entity set.
2. Candidate key: A minimal super key is known as a candidate key. An entity set may
have more than one candidate key.
3. Primary key: A primary key is one of the candidate keys chosen by the database
designer to uniquely identify the entity set.
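The key concepts above can be sketched in code: an attribute combination is a super key if its values uniquely identify every entity in the entity set. This is an illustrative example (not from the original text); the sample data and names are hypothetical:

```python
# Hedged sketch: checking whether an attribute set is a super key.
# A super key uniquely identifies each entity; a minimal one is a candidate
# key, and the designer picks one candidate key as the primary key.

students = [
    {"roll_number": 1, "name": "Asha", "class": "A"},
    {"roll_number": 2, "name": "Ravi", "class": "A"},
    {"roll_number": 3, "name": "Asha", "class": "B"},
]

def is_super_key(rows, attrs):
    """True if the attribute combination uniquely identifies every row."""
    seen = {tuple(row[a] for a in attrs) for row in rows}
    return len(seen) == len(rows)

assert is_super_key(students, ("roll_number",))    # candidate (and primary) key
assert not is_super_key(students, ("name",))       # names repeat, so not a key
assert is_super_key(students, ("name", "class"))   # super key, but not minimal
```

Note that ("name", "class") is a super key here only by accident of the sample data; roll_number is the deliberate unique identifier, which is why it would be chosen as the primary key.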

2. Composite attribute: An attribute that is a combination of other attributes is called a
composite attribute. For example, in a student entity, the student address is a composite
attribute, as an address is composed of other attributes such as pin code, state, and country.

3. Single-valued attribute: A single-valued attribute contains a single value. For example,
Social_Security_Number.

4. Multi-valued Attribute: If an attribute can have more than one value, it is known as a
multi-valued attribute. Multi-valued attributes are depicted by the double ellipse. For
example, a person can have more than one phone number, email-address, etc.

5. Derived attribute: Derived attributes are attributes that do not exist in the physical
database; their values are derived from other attributes present in the database. For
example, age can be derived from date_of_birth. In the ER diagram, derived attributes are
depicted by a dashed ellipse.
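The age example above can be sketched as code. This is an illustrative sketch (not from the original text): the stored attribute is date_of_birth, and age is computed on demand rather than stored:

```python
# Hedged sketch: a derived attribute computed from a stored attribute.
# 'age' is not stored in the database; it is derived from date_of_birth.
from dataclasses import dataclass
from datetime import date

@dataclass
class Student:
    name: str
    date_of_birth: date  # stored (single-valued) attribute

    def age(self, today=None):
        """Derived attribute: computed, never stored."""
        today = today or date.today()
        born = self.date_of_birth
        # Subtract one year if this year's birthday has not yet occurred.
        return (today.year - born.year
                - ((today.month, today.day) < (born.month, born.day)))

s = Student("Asha", date(2000, 6, 15))
print(s.age(today=date(2024, 6, 14)))  # 23: birthday not yet reached
```

Storing age directly would make the database inconsistent over time; deriving it keeps date_of_birth as the single source of truth.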
3. Relationships

An association among entities is known as a relationship. Relationships are represented by
a diamond-shaped box. For example, an employee works_at a department, and a student
enrolls in a course. Here, works_at and enrolls are called relationships.

Relationship set

A set of relationships of a similar type is known as a relationship set. Like entities, a
relationship can also have attributes. These attributes are called descriptive attributes.

Degree of a relationship set

The number of participating entities in a relationship defines the degree of the
relationship. The three most common relationships in E-R models are:

1. Unary (degree 1)
2. Binary (degree 2)
3. Ternary (degree 3)

1. Unary relationship: This is also called a recursive relationship. It is a relationship
between the instances of one entity type. For example, one person is married to only one
person.

2. Binary relationship: It is a relationship between the instances of two entity types. For
example, a teacher teaches a subject.
3. Ternary relationship: It is a relationship amongst instances of three entity types. In fig,
the relationship "may have" provides the association of three entities, i.e., TEACHER,
STUDENT, and SUBJECT. All three entities are many-to-many participants. There may be
one or many participants in a ternary relationship.

In general, "n" entities can be related by the same relationship, which is known as an n-ary
relationship.

Cardinality

Cardinality describes the number of entities in one entity set that can be associated with
entities of another set via a relationship set.

Types of Cardinalities

1. One to One: One entity from entity set A can be associated with at most one entity of
entity set B, and vice versa. Let us assume that each student has only one student ID, and
each student ID is assigned to only one person. So, the relationship will be one to one.

Using Sets, it can be represented as:


2. One to Many: When a single instance of an entity is associated with more than one
instance of another entity, it is called a one-to-many relationship. For example, a customer
can place many orders, but an order cannot be placed by many customers.

Using Sets, it can be represented as:

3. Many to One: More than one entity from entity set A can be associated with at most one
entity of entity set B; however, an entity from entity set B can be associated with more than
one entity from entity set A. For example, many students can study in a single college, but
a student cannot study in many colleges at the same time.

Using Sets, it can be represented as:


4. Many to Many: One entity from A can be associated with more than one entity from B
and vice-versa. For example, the student can be assigned to many projects, and a project can
be assigned to many students.

Using Sets, it can be represented as:
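The one-to-many case above can be sketched in code. This illustrative example (not from the original text) enforces the constraint that each order maps to exactly one customer while a customer may have many orders; all names are hypothetical:

```python
# Hedged sketch of one-to-many cardinality: a customer can place many
# orders, but each order belongs to exactly one customer.

orders = {}  # order_id -> customer_id (each order has exactly ONE customer)

def place_order(order_id, customer_id):
    """Record an order, rejecting a second customer for the same order."""
    if order_id in orders:
        raise ValueError("an order cannot be placed by more than one customer")
    orders[order_id] = customer_id

place_order("O1", "C1")
place_order("O2", "C1")      # one customer, many orders: allowed
try:
    place_order("O1", "C2")  # same order, a second customer: rejected
except ValueError as e:
    print(e)
```

Using a dictionary keyed by order_id makes the "many" side explicit: many keys may share the same customer value, but each key holds exactly one value.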

Decision Table

A decision table is a brief visual representation for specifying which actions to perform
depending on given conditions. The information represented in decision tables can also be
represented as decision trees or in a programming language using if-then-else and switch-
case statements.
A decision table is a good way to deal with different combinations of inputs and their
corresponding outputs, and it is also called a cause-effect table. It is called a cause-effect
table because a related logical diagramming technique, called cause-effect graphing, is
basically used to obtain the decision table.
Importance of Decision Tables:
1. Decision tables are very helpful as a test design technique.
2. They help testers explore the effects of combinations of different inputs and other
software states that must correctly implement business rules.
3. They provide a systematic way of stating complex business rules, which is helpful
for developers as well as for testers.
4. They assist the developer in doing a better job during the development process, since
testing with all combinations might be impractical.
5. A decision table is an outstanding technique used in both testing and
requirements management.
6. It is a structured exercise for preparing requirements when dealing with complex
business rules.
7. It is also used to model complicated logic.

Decision Table in test designing:


Blank Decision Table
CONDITIONS STEP 1 STEP 2 STEP 3 STEP 4
Condition 1
Condition 2
Condition 3
Condition 4
Decision Table: Combinations
CONDITIONS STEP 1 STEP 2 STEP 3 STEP 4
Condition 1 Y Y N N
Condition 2 Y N Y N
Condition 3 Y N N Y
Condition 4 N Y Y N
Advantage of Decision Table:
1. Any complex business flow can be easily converted into the test scenarios & test
cases using this technique.
2. Decision tables work iteratively that means the table created at the first iteration
is used as input table for next tables. The iteration is done only if the initial table
is not satisfactory.
3. Simple to understand and everyone can use this method design the test scenarios
& test cases.
4. It provide complete coverage of test cases which help to reduce the rework on
writing test scenarios & test cases.
5. These tables guarantee that we consider every possible combination of condition
values. This is known as its completeness property.
For example: a bookstore gets a trade discount of 25% on orders of more than 6 books. For
orders from libraries and individuals, a 5% discount is allowed on orders of 6-19 copies per
book title, 10% on orders of 20-49 copies per book title, and 15% on orders of 50 copies or
more per book title.
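These rules can be sketched as a decision-table-style function, with one test case per rule (column). The category names and the exact handling of the tier boundaries below are one reading of the example, not a definitive specification.

```python
# Sketch of the bookstore discount decision table described above.
# Customer categories and tier boundaries are assumptions drawn
# from the prose example.

def discount(customer: str, copies: int) -> float:
    """Return the discount fraction for an order."""
    if customer == "bookstore":
        # trade discount: 25% for orders of more than 6 books
        return 0.25 if copies > 6 else 0.0
    # libraries and individuals share the same volume tiers
    if 6 <= copies <= 19:
        return 0.05
    if 20 <= copies <= 49:
        return 0.10
    if copies >= 50:
        return 0.15
    return 0.0

# One test case per decision-table rule:
assert discount("bookstore", 10) == 0.25
assert discount("library", 25) == 0.10
assert discount("individual", 50) == 0.15
assert discount("individual", 3) == 0.0
```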

Software Quality Assurance (SQA)

Software Quality Assurance (SQA) is simply a way to assure quality in the software. It
is the set of activities that ensures that processes, procedures, and standards are suitable
for the project and implemented correctly.
Software Quality Assurance is a process that works in parallel with software development.
It focuses on improving the development process so that problems can be prevented before
they become major issues. Software Quality Assurance is an umbrella activity that is
applied throughout the software process.
Software Quality Assurance includes:
1. A quality management approach
2. Formal technical reviews
3. Multi testing strategy
4. Effective software engineering technology
5. Measurement and reporting mechanism
Major Software Quality Assurance Activities:
1. SQA Management Plan:
Plan how you will carry out SQA throughout the project. Decide which set of
software engineering activities is best for the project, and check the skill
level of the SQA team.
2. Set the Checkpoints:
The SQA team should set checkpoints and evaluate the performance of the project
on the basis of data collected at these checkpoints.
3. Multi-testing Strategy:
Do not depend on a single testing approach. When many testing approaches are
available, use them.
4. Measure Change Impact:
A change made to correct an error sometimes reintroduces other errors. Measure
the impact of each change on the project, and retest after each fix to check its
compatibility with the whole project.
5. Manage Good Relations:
Maintaining good relations with the other teams involved in project development
is essential. A bad relationship between the SQA team and the programming team
will directly harm the project. Do not play politics.
Benefits of Software Quality Assurance (SQA):
1. SQA produces high-quality software.
2. A high-quality application saves time and cost.
3. SQA improves reliability.
4. SQA reduces the need for maintenance over a long period.
5. High-quality commercial software increases the company's market share.
6. It improves the process of creating software.
7. It improves the quality of the software.
Disadvantage of SQA:
The main disadvantage of quality assurance is cost: it requires additional resources,
including extra staff dedicated to maintaining quality.
Software Quality Assurance Plan
• The software quality assurance plan comprises the procedures, techniques, and
tools that are employed to make sure that a product or service aligns with the
requirements defined in the SRS (Software Requirement Specification).
• The SQA plan document consists of the below sections:
1. Purpose section
2. Reference section
3. Software configuration management section
4. Problem reporting and corrective action section
5. Tools, technologies and methodologies section
6. Code control section
7. Records: Collection, maintenance and retention section
8. Testing methodology
SQA Techniques
• Auditing: Auditing involves inspection of the work products and its related
information to determine if the set of standard processes were followed or not.
• Reviewing: A meeting in which the software product is examined by both the
internal and external stakeholders to seek their comments and approval.
• Code Inspection: It is the most formal kind of review; it uses static testing to find
bugs and to prevent defects from propagating into later stages. It is led by a trained
moderator, follows rules, checklists, and entry and exit criteria, and the reviewer
must not be the author of the code.
• Design Inspection: Design inspection is done using a checklist that inspects the
below areas of software design:
• General requirements and design
• Functional and Interface specifications
• Conventions
• Requirement traceability
• Structures and interfaces
• Logic
• Performance
• Error handling and recovery
• Testability, extensibility
• Coupling and cohesion
• Simulation: Simulation is a tool that models the real-life situation in order to
virtually examine the behavior of the system under study.
• Functional Testing: It is a QA technique that verifies what the system does
without considering how it does it. This type of black-box testing mainly focuses on
testing the system specifications or features.
• Standardization: Standardization plays a crucial role in quality assurance. It
decreases the ambiguity and guesswork, thus ensuring quality.
• Static Analysis: It is a software analysis performed by an automated tool without
actually executing the program. This technique is widely used for quality assurance in
medical, nuclear, and aviation software. Software metrics and reverse engineering are
popular forms of static analysis.
• Walkthroughs: Software walkthrough or code walkthrough is a kind of peer review
where the developer guides the members of the development team to go through the
product and raise queries, suggest alternatives, make comments regarding possible
errors, standard violations or any other issues.
• Path Testing: It is a white-box testing technique in which complete branch
coverage is ensured by executing each independent path at least once.
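As an illustration, consider a small hypothetical function with three independent paths; one test case per path gives complete branch coverage. The function and its thresholds are invented for this sketch.

```python
# Illustration of path testing: grade() has three independent paths,
# and the test cases below execute each path at least once.

def grade(score: int) -> str:
    if score >= 75:        # path 1: high scores
        return "distinction"
    if score >= 40:        # path 2: middle scores
        return "pass"
    return "fail"          # path 3: low scores

# One test per independent path:
assert grade(90) == "distinction"
assert grade(55) == "pass"
assert grade(20) == "fail"
```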
• Stress Testing: This type of testing is done to check how robust a system is by
testing it under heavy load i.e. beyond normal conditions.
• Six Sigma: Six Sigma is a quality assurance approach that aims at nearly perfect
products or services. It is widely applied in many fields, including software. The main
objective of Six Sigma is process improvement so that the produced software is
99.99966% defect free (about 3.4 defects per million opportunities).
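The defect-free percentage for a given DPMO (defects per million opportunities) follows directly from the definition; at the Six Sigma level of 3.4 DPMO it works out to 99.99966%.

```python
# Convert defects per million opportunities (DPMO) into a
# defect-free percentage.

def defect_free_percentage(dpmo: float) -> float:
    return (1 - dpmo / 1_000_000) * 100

# Six Sigma corresponds to roughly 3.4 DPMO:
print(round(defect_free_percentage(3.4), 5))  # 99.99966
```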
ISO 9000 Certification

ISO (the International Organization for Standardization) is a consortium of national
standards bodies established to plan and foster standardization. ISO published its 9000
series of standards in 1987. It serves as a reference for contracts between independent
parties. The ISO 9000 standard determines the guidelines for maintaining a quality system.
The standard mainly addresses operational and organizational methods, such as
responsibilities, reporting, etc. ISO 9000 defines a set of guidelines for the production
process and is not directly concerned with the product itself.

Types of ISO 9000 Quality Standards


The ISO 9000 series of standards is based on the assumption that if a proper process is
followed for production, then good-quality products are bound to follow automatically. The
types of industries to which the various ISO standards apply are as follows.

1. ISO 9001: This standard applies to organizations engaged in the design,
development, production, and servicing of goods. This is the standard that applies to
most software development organizations.
2. ISO 9002: This standard applies to organizations that do not design products
but are involved only in production. Examples in this category include steel and car
manufacturing industries that buy product and plant designs from external sources
and are engaged only in manufacturing those products. Therefore, ISO 9002 does not
apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the
installation and testing of products. For example, gas companies.

How to get ISO 9000 Certification?

An organization that decides to obtain ISO 9000 certification applies to a registrar
for registration. The process consists of the following stages:
1. Application: Once an organization decides to go for ISO certification, it applies to
the registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the
organization.
3. Document Review and Adequacy Audit: During this stage, the registrar reviews
the documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization
has complied with the suggestions made during the review.
5. Registration: The registrar awards the ISO certification after the successful
completion of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization from
time to time.

Software Engineering Institute Capability Maturity Model (SEICMM)

The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process.

The model defines a five-level evolutionary path of increasingly organized and
consistently more mature processes.

CMM was developed and is promoted by the Software Engineering Institute (SEI), a
research and development center sponsored by the U.S. Department of Defense (DoD).

The Capability Maturity Model is used as a benchmark to measure the maturity of an
organization's software process.
Methods of SEICMM

There are two methods of SEI CMM:

Capability Evaluation: Capability evaluation provides a way to assess the software
process capability of an organization. The results indicate the likely performance of a
contractor if that contractor is awarded the work. Therefore, the results of a software
process capability assessment can be used to select a contractor.

Software Process Assessment: Software process assessment is used by an organization to
improve its own process capability. Thus, this type of evaluation is purely for internal
use.

SEI CMM categorizes software development organizations into the following five maturity
levels. The levels have been designed so that an organization can build up its quality
system gradually, starting from scratch.

Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few
or no processes are defined and followed. Since software production processes are not
defined, different engineers follow their own processes and, as a result, development
efforts become chaotic. Therefore, this is also called the chaotic level.

Level 2: Repeatable

At this level, the fundamental project management practices like tracking cost and schedule
are established. Size and cost estimation methods, like function point analysis, COCOMO,
etc. are used.
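As a sketch of one of the estimation methods mentioned, the basic COCOMO effort equation for an "organic" project is E = 2.4 * (KLOC)^1.05 person-months, using the standard constants for that project class.

```python
# Basic COCOMO effort estimation for an "organic" project
# (standard constants a = 2.4, b = 1.05; size in KLOC,
# effort in person-months).

def cocomo_organic_effort(kloc: float) -> float:
    return 2.4 * kloc ** 1.05

print(round(cocomo_organic_effort(10), 1))  # 26.9 person-months for 10 KLOC
```

Semi-detached and embedded projects use different constants; this sketch covers only the organic case.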

Level 3: Defined

At this level, the methods for both management and development activities are defined and
documented. There is a common organization-wide understanding of activities, roles, and
responsibilities. Although the processes are defined, process and product qualities are not
yet measured. ISO 9000 aims at achieving this level.

Level 4: Managed

At this level, the focus is on software metrics. Two kinds of metrics are collected.

Product metrics measure the features of the product being developed, such as its size,
reliability, time complexity, understandability, etc.

Process metrics measure the effectiveness of the process being used, such as the average
defect correction time, productivity, the average number of defects found per hour of
inspection, the average number of failures detected during testing per LOC, etc. The
software process and product quality are measured, and quantitative quality requirements
for the product are met. Various tools, such as Pareto charts and fishbone diagrams, are
used to measure product and process quality. The process metrics are used to analyze
whether a project performed satisfactorily. Thus, the outcome of process measurement is
used to evaluate project performance rather than to improve the process.
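A minimal sketch of computing two of the process metrics named above from hypothetical project records; all numbers are illustrative only.

```python
# Hypothetical project records (illustrative values, not real data).
defects_found = 45          # defects found during inspections
inspection_hours = 30       # total hours spent inspecting
fix_times_days = [1.0, 2.5, 0.5, 3.0]  # time to correct each defect

# Two of the process metrics mentioned above:
defects_per_hour = defects_found / inspection_hours
avg_fix_time = sum(fix_times_days) / len(fix_times_days)

print(f"Defects found per inspection hour: {defects_per_hour:.2f}")   # 1.50
print(f"Average defect correction time:    {avg_fix_time:.2f} days")  # 1.75
```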

Level 5: Optimizing

At this level, process and product metrics are collected, and the measurement data are
evaluated for continuous process improvement.

Key Process Areas (KPA) of a software organization

Except for SEI CMM Level 1, each maturity level is characterized by several Key Process
Areas (KPAs) that identify the areas an organization should focus on to improve its
software process to the next level. The focus of each level and the corresponding key
process areas are shown in the fig.
SEI CMM provides a series of key areas on which to focus to take an organization from one
level of maturity to the next. Thus, it provides a path for gradual quality improvement
over several stages. Each step has been carefully designed so that it enhances the
capability already built up.
