
ADIGRAT UNIVERSITY

College of Engineering and Technology


Department of Software Engineering
Course Title: Software Quality Assurance and Testing
Course Code: SENG4113
Group Assignment: JUnit Testing
Group Members
Name ID No
1. Bereketab Gaim 06068/10
2. Gebremedhin Tesfay 01706/09
3. Hiwet Gebrecherkos 06482/10
4. Huluf Birhane 06488/10
5. Kinfe Hailemariam 06543/10
6. Mearg Gebremedhin 06610/10
7. Shewit Gebrehawerya 06918/10
Submission Date: November 21, 2023
Submitted To: Mr. Gebrehiwet
Assignment One

1. Discuss and briefly explain the different types of view on software quality according to
Kitchenham and Pfleeger’s article.
Kitchenham and Pfleeger proposed a comprehensive framework for understanding different
perspectives or views on software quality in their article titled "Software Quality: The Elusive Target."
According to their framework, there are six distinct views or dimensions of software quality.

1. Transcendental View:
The transcendental view focuses on the inherent nature of quality that cannot be precisely defined or
measured.
It considers software quality as an abstract concept that is recognized when seen or experienced. This
view emphasizes the subjective perception of quality and the role of individual judgment in assessing
software quality.

2. User View:
The user view centers on the satisfaction of the end-users or stakeholders. It defines quality based on
the extent to which the software meets the user's needs, expectations, and requirements. This view
emphasizes the importance of usability, functionality, and user experience in determining software
quality.

3. Manufacturing View:
The manufacturing view treats software development as a production process. It defines quality in
terms of conformance to predefined specifications, adherence to coding standards, and absence of
defects. This view focuses on objective measures, such as defect density, reliability metrics, and
adherence to coding guidelines.

4. Product View:
The product view considers quality as a set of measurable attributes or characteristics of the software
product. It involves assessing quality based on specific measurable criteria, such as reliability,
performance, maintainability, and security. This view emphasizes quantitative measures and objective
evaluation of software quality.

5. Value-Based View:
The value-based view relates quality to the value that the software delivers to the stakeholders or the
organization. It considers quality in terms of the benefits, cost-effectiveness, and competitive
advantage provided by the software. This view emphasizes the alignment of software quality with
business goals and the overall value proposition.

6. User-Transcendent View:
The user-transcendent view combines elements of the transcendental and user views. It recognizes
that software quality involves both intrinsic aspects and meeting user expectations. It acknowledges
the importance of subjective judgment while also considering objective measures and user
satisfaction.

These six views represent different perspectives on software quality, each emphasizing different
aspects and considerations. They highlight the multidimensional nature of software quality and the
need for a comprehensive approach that incorporates various viewpoints. By understanding these
views, software practitioners can adopt a holistic approach to software quality and address the
diverse needs and expectations of stakeholders.

In Kitchenham and Pfleeger’s article, the six views on software quality can be summarized as:
 Transcendental View: hard to define, but recognized when seen
 User View: fitness for purpose, or meeting the user’s needs
 Manufacturing View: conformance to process standards
 Product View: inherent product characteristics
 Value-Based View: the customer’s willingness to pay
 User-Transcendent View: combines elements of the user and transcendental views

2. What is software quality assurance and what are the key components and activities of
software quality assurance.
Software Quality Assurance (SQA) is a set of activities for ensuring quality in software engineering
processes. It ensures that developed software meets and complies with the defined or standardized quality
specifications. SQA is an ongoing process within the Software Development Life Cycle (SDLC) that routinely
checks the developed software to ensure it meets the desired quality measures.
SQA practices are implemented in most types of software development, regardless of the underlying
software development model being used. SQA incorporates and implements software testing methodologies
to test the software. Rather than checking for quality after completion, SQA processes test for quality in each
phase of development, until the software is complete. With SQA, the software development process moves
into the next phase only once the current/previous phase complies with the required quality standards.
Software quality assurance is a critical part of a successful software development process. The more
intensive the quality assurance, the better off your business will be in the long run.

 Key components and activities of Software Quality Assurance include:


1. Quality Planning: Developing a plan that outlines the processes, standards, and methodologies to be used
throughout the software development life cycle. Establishing criteria for quality, defining quality objectives,
and determining the resources required.
2. Process and Product Standards: Establishing and implementing process standards that define how the
software development process should be conducted. Developing product standards that define the
characteristics and features the software product should possess.

3. Quality Assurance Audits: Conducting periodic audits to ensure compliance with established processes
and standards. Identifying areas of non-compliance and recommending corrective actions.

4. Reviews and Inspections: Conducting systematic reviews and inspections of project documentation,
code, and other artifacts to identify and address issues early in the development process.

5. Testing: Planning and executing testing activities to validate that the software meets specified
requirements. Conducting different types of testing, such as unit testing, integration testing, system testing,
and acceptance testing.

6. Configuration Management: Managing changes to the software configuration systematically. Ensuring
that changes are properly documented, tracked, and tested to maintain the integrity of the software.
7. Training and Education: Providing training and education to the development team on quality
standards, processes, and methodologies. Ensuring that team members are equipped with the
necessary skills to contribute to the development of high-quality software.
8. Metrics and Measurement: Establishing and monitoring metrics to measure the effectiveness of
the software development processes. Analyzing metrics to identify areas for improvement and
making data-driven decisions (a short example follows this list).
9. Continuous Improvement: Identifying opportunities for process improvement based on feedback,
audits, and performance metrics. Implementing corrective and preventive actions to enhance the
overall quality of the software development process.
10. Documentation: Ensuring that all processes, standards, and procedures are well-documented.
Providing clear and comprehensive documentation to aid in the understanding and execution of
quality-related activities.
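
To make the "Metrics and Measurement" component above concrete, here is a minimal Java sketch of one widely used quality metric, defect density (defects per thousand lines of code, KLOC). The class name and the sample figures are illustrative assumptions, not part of the course material.

```java
// Minimal sketch: computing defect density, a common SQA metric.
// defect density = number of defects / KLOC (thousands of lines of code)
public class DefectDensity {

    public static double defectDensity(int defectCount, int linesOfCode) {
        if (linesOfCode <= 0) {
            throw new IllegalArgumentException("linesOfCode must be positive");
        }
        return defectCount / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        // Hypothetical figures: 45 defects found in a 30,000-line module
        System.out.println(defectDensity(45, 30_000)); // prints 1.5 (defects per KLOC)
    }
}
```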

 Software Quality Assurance Activities

Given below are some of the activities of Software Quality Assurance.

1. Setting the Checkpoint

The SQA team sets checkpoints at specific time intervals to check the progress, quality, and performance
of the software, and whether the quality work is being done on time as per the schedule and documents.

2. Measure Change Impact


For a defect reported by QA and fixed by the developer, it is very important to retest the fix and to
verify that it does not introduce new defects into the working software. For this, test metrics are
maintained and observed by managers and developers to check for defects newly introduced by added
functionality or by the fix itself.

3. Having Multiple Testing Strategies

One should not rely on a single testing approach or strategy. Multiple testing strategies should be
applied so that the software is tested from different angles and all areas are covered. For an e-commerce
website, for example, security testing, performance testing, load testing, and database testing should all
be done to ensure better software quality.

4. Maintaining Records and Reports

It is important to keep all QA records and documents and to share them with stakeholders from time to time.
Test cases executed, test cycles, defects logged, defects fixed, test cases created, and changes in requirements
from the client for a specific test case should all be properly documented for future reference.

5. Managing Good Relations

Maintaining good relations between testers and developers plays an important role in a project. The roles
of developer and tester can pull in opposite directions, but this should not be taken personally. The main aim
of both teams should be the delivery of a good-quality product with minimal risk of failure.

6. SQA Management Plan

This includes determining how SQA will work most effectively in the new project. It considers SQA
strategies, the software engineering processes that could be implemented as per the project requirements, and
the individual skills of team members.

3. What are the causes of software errors and what is the difference between software errors,
faults and failures.
The main causes of software errors are the following:
1. Faulty Requirements Definition
 Usually considered the root cause of software errors
 Incorrect requirement definitions
• Simply stated, ‘wrong’ definitions (formulas, etc.)
 Incomplete definitions
• Unclear or implied requirements
 Missing requirements
• Just flat-out 'missing'. (e.g., program element code)
 Inclusion of unneeded requirements
• Many projects have gone awry by including far too many requirements that will never be
used.
• Impacts budgets, complexity, development time, …
2. Client-developer communication failures
 Misunderstanding of instructions in requirements documentation (written / graphical
instructions)
 Misunderstanding of written changes during development.
 Misunderstanding of oral changes during development.
 Lack of attention to client messages by developers dealing with requirement changes, and lack of
attention by clients to developer questions.
 Very often, it seems, these very talented individuals come from different planets.
 Clients represent the users; developers sometimes represent a different mindset entirely!

3. Deliberate deviations from software requirements


 Developer reuses previous / similar work to save time.
 Due to time or budget pressures, the developer decides to omit part of the required functions in
an attempt to cope with these pressures.
 Often, reused code needs modification, as it may contain unneeded or unusable extraneous
code.
 Developer-initiated improvements, introduced without the client’s approval, frequently disregard
requirements that seem minor to the developer. Such “minor” changes may eventually cause
software errors.

4. Logical design errors


Software errors can enter the system when the professionals who design the system – systems
architects, software engineers, analysts, etc. – formulate the software requirements. Typical errors
include:
 Definitions that represent software requirements by means of erroneous algorithms.
 Process definitions that contain sequencing errors.
 Erroneous definition of boundary conditions.

5. Coding errors

Programmers make coding errors for a broad range of reasons. These include misunderstanding the
design documentation, linguistic errors in the programming languages, errors in the application of CASE
and other development tools, errors in data selection, and so forth.

6. Shortcomings of the testing process


 Failure to promptly correct detected software faults as a result of inappropriate indications of
the reasons for the fault.
 Incomplete correction of detected errors due to negligence or time pressures.
 Failure to document and report detected errors and faults.
 Incomplete test plans leave untreated portions of the software or the application functions and
states of the system.
7. Procedure errors

Procedures direct the user with respect to the activities required at each step of the process. They are of
special importance in complex software systems where the processing is conducted in several steps, each
of which may feed a variety of types of data and allow for examination of the intermediate results.

8. Documentation Errors
 Errors in the design documents
 If the documents do not represent the implemented design, this causes trouble for subsequent
redesign and reuse
 Errors in the documentation in the User Manuals, Operators Manual, other manuals
(Installation…)
 Errors in on-line help, if available.
 Listing of non-existing software functions
 Planned early but dropped; remain in documentation!
 Many error messages are totally meaningless
 The difference between software errors, faults and failures

Definition
 Error: a mistake made in the code; because of it, the code cannot be compiled or executed correctly.
 Fault: a state of the software that causes it to fail to accomplish its essential function.
 Failure: the result of defects in the software; if the software has many defects, they lead to or cause failure.

Raised by
 Error: raised by developers and automation test engineers.
 Fault: caused by human mistakes.
 Failure: found by the manual test engineer through the development cycle.

Types
 Error: syntactic errors, user interface errors, flow control errors, error handling errors, calculation errors, hardware errors, testing errors.
 Fault: business logic faults, functional and logical faults, faulty GUI, performance faults, security faults, software/hardware faults.
 Failure: no types are listed.
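
To connect the three terms in the comparison above, the short Java sketch below traces a single programmer mistake through the error, fault, and failure stages. The Discount class and its values are invented purely for illustration.

```java
// Illustrative sketch: one mistake viewed as error, fault, and failure.
public class Discount {

    // ERROR: the programmer mistakenly typed "+" instead of "-".
    // The resulting defective statement is the FAULT that now sits in the code.
    public static double applyDiscount(double price, double discount) {
        return price + discount; // should have been: price - discount
    }

    public static void main(String[] args) {
        // FAILURE: when the faulty code is executed, the observable result
        // (110.0) deviates from what the user expects (90.0).
        System.out.println(applyDiscount(100.0, 10.0));
    }
}
```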

4. What is the difference between static analysis and dynamic analysis

Static analysis and dynamic analysis are two different approaches used in various fields, including
engineering, software development, and financial modeling. Here's a comparison of the two:

Static Analysis:

1. Definition: Static analysis is a technique that analyzes the behavior of a system or structure without
considering the effects of time or dynamic forces.
2. Time Independence: Static analysis assumes that the system is in equilibrium and does not take into
account the time-dependent behavior or the effects of inertia.
3. Load Conditions: Static analysis typically considers a specific set of applied loads or forces that are
constant and do not change over time.
4. Results: It provides information about the response of a structure or system under steady-state
conditions, such as stress distribution, deformation, and stability.
5. Applications: Static analysis is commonly used for structural analysis, determining material properties,
evaluating design feasibility, and assessing safety factors.

Dynamic Analysis:

1. Definition: Dynamic analysis is a technique that simulates the behavior of a system or structure over
time, considering the effects of time-varying forces and dynamic phenomena.
2. Time Dependency: Dynamic analysis takes into account the time-dependent behavior of the system,
including inertial forces, acceleration, and motion.
3. Load Conditions: Dynamic analysis considers time-varying loads, such as seismic forces, wind gusts,
impact loads, or any force that changes over time.
4. Results: It provides information about the dynamic response of a structure, including natural
frequencies, mode shapes, vibration amplitudes, and transient behavior.
5. Applications: Dynamic analysis is commonly used for evaluating the response of structures to dynamic
loads, designing earthquake-resistant structures, analyzing vibrations in mechanical systems, and
studying fluid dynamics.

In summary, static analysis focuses on the behavior of a system under steady-state conditions and assumes
time independence, while dynamic analysis considers the time-varying behavior of a system and analyzes its
response to dynamic forces and phenomena. Both approaches have their own specific applications and are used
to address different aspects of system behavior.

5. Briefly explain the difference between verification and validation.

Verification and validation are two distinct processes used in various fields, such as engineering, software
development, and quality management.

Verification:
Verification is a process of evaluating a system, product, or component to determine if it meets specified
requirements and complies with established standards or guidelines. It focuses on ensuring that the system has
been built or implemented correctly. In verification, the emphasis is on checking the accuracy, completeness,
and correctness of the system with respect to the defined specifications. It involves activities such as reviews,
inspections, and testing to confirm that the system has been developed according to the intended design and
requirements.

Validation:
Validation is a process of evaluating a system, product, or component during or at the end of the
development process to determine if it satisfies the intended use and the needs of the stakeholders. It focuses
on assessing the system's effectiveness, suitability, and fitness for its intended purpose. Validation involves
activities such as user testing, simulations, and demonstrations to confirm that the system meets the user's
expectations and fulfills the intended requirements. The goal of validation is to ensure that the system solves
the right problem and provides the desired outcomes.
The ‘verification’ process establishes the correspondence of an implementation phase of the software
development process with its specification, whereas ‘validation’ establishes the correspondence between a
system and users’ expectations.

 Verification: answers the question “Is the system being developed in the right way?”
 Validation: answers the question “Is the right system being developed?”

6. Define and explain the following testing types.

a) Unit Testing
b) Regression Testing
c) Integration Testing
d) System Testing
a) Unit Testing
Unit testing is a software development process in which the smallest testable parts of an
application, called units, are individually scrutinized for proper operation. Software developers and
sometimes QA staff complete unit tests during the development process. The main objective of unit
testing is to isolate written code to test and determine if it works as intended.

Unit testing is a component of test-driven development (TDD), a pragmatic methodology that takes a
meticulous approach to building a product by means of continual testing and revision.

Unit testing is a crucial part of the software development process, as it helps ensure the reliability and
correctness of individual units, which contributes to the overall quality of the software.

Unit testing can be classified into two phases:

• Static unit testing: In static unit testing, code is reviewed by applying techniques commonly known
as inspection and walkthrough.

• Dynamic unit testing: Execution-based unit testing is referred to as dynamic unit testing. In this
testing, a program unit is actually executed in isolation.

Relationship between Static and Dynamic Testing in Unit Testing:

Prevention vs. Detection: Static testing is more preventive in nature, aiming to find issues early in the
development process before code execution. Dynamic testing is more about detecting issues through
the execution of code.

Complementary Roles: Both static and dynamic testing play crucial roles in ensuring software quality.
Static testing helps catch issues before code execution, while dynamic testing verifies the actual
behavior during execution.
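
Since this assignment focuses on JUnit testing, here is a minimal sketch of dynamic unit testing written with JUnit 5. The Calculator class is a hypothetical unit invented for the example, not something specified in the assignment.

```java
// Minimal JUnit 5 sketch: a single unit executed in isolation and checked.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class Calculator {
    int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }
}

class CalculatorTest {

    private final Calculator calculator = new Calculator();

    @Test
    void divideReturnsQuotientForValidInput() {
        assertEquals(5, calculator.divide(10, 2));
    }

    @Test
    void divideRejectsZeroDivisor() {
        // Boundary condition: dividing by zero must raise an exception.
        assertThrows(IllegalArgumentException.class, () -> calculator.divide(10, 0));
    }
}
```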

b) Regression Testing
Regression Testing is a type of testing in the software development cycle that runs after every change to
ensure that the change introduces no unintended breaks. Regression testing addresses a common issue
that developers face: the emergence of old bugs with the introduction of new changes.

Regression testing is a type of software testing that aims to ensure that changes or additions to a
software application do not negatively affect existing functionalities. It involves rerunning previously
executed test cases to verify that the existing features still work as intended after modifications or new
features have been introduced.

The main idea in regression testing is to verify that no defect has been introduced into the unchanged
portion of a system due to changes made elsewhere in the system.
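
As a hedged JUnit 5 sketch of this idea, the test below is assumed to have been written for existing behaviour; it is simply re-run after every change and fails if the old rule is broken. PriceCalculator, its rounding rule, and the "regression" tag are assumptions made for illustration only.

```java
// Sketch: an existing test re-run after each change to guard old behaviour.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class PriceCalculator {
    // Existing behaviour: prices are rounded to two decimal places.
    double round(double price) {
        return Math.round(price * 100) / 100.0;
    }
}

class PriceCalculatorRegressionTest {

    @Test
    @Tag("regression") // lets the whole regression suite be selected and re-run after each change
    void roundingBehaviourIsUnchanged() {
        // This assertion captured the behaviour before the latest change;
        // if a new feature breaks the old rounding rule, this test fails.
        assertEquals(19.99, new PriceCalculator().round(19.994), 1e-9);
    }
}
```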

c) Integration Testing

Integration testing is a type of software testing where components of the software are gradually
integrated and then tested as a unified group. Usually, these components are already working well
individually, but they may break when integrated with other components. With integration testing,
testers want to find defects that surface due to code conflicts between software modules when they
are integrated with each other.

Integration testing is a software testing technique in which individual components or modules of a
software application are combined and tested as a group. The goal of integration testing is to verify that
the integrated components work together as intended, ensuring that they properly communicate,
exchange data, and collaborate within the larger system.
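
The JUnit 5 sketch below illustrates this: two hypothetical modules, a repository and a service that depends on it, are wired together and tested as a group rather than in isolation. All class names are assumptions made for the example.

```java
// Sketch: an integration-style test exercising two modules together.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

class InMemoryUserRepository {
    private final Map<Integer, String> users = new HashMap<>();

    void save(int id, String name) { users.put(id, name); }
    String findName(int id)        { return users.get(id); }
}

class GreetingService {
    private final InMemoryUserRepository repository;

    GreetingService(InMemoryUserRepository repository) { this.repository = repository; }

    String greet(int userId) { return "Hello, " + repository.findName(userId) + "!"; }
}

class GreetingServiceIntegrationTest {

    @Test
    void serviceAndRepositoryWorkTogether() {
        // Both modules are combined; the test checks that they communicate
        // and exchange data correctly as a group.
        InMemoryUserRepository repository = new InMemoryUserRepository();
        repository.save(1, "Alem");

        GreetingService service = new GreetingService(repository);
        assertEquals("Hello, Alem!", service.greet(1));
    }
}
```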

 Common approaches to performing system integration:

i. Big Bang Integration Testing

Big Bang Integration testing is an integration testing approach in which all modules are integrated
and tested at once, as a singular entity. The integration process is not carried out until all components
have been successfully unit tested.

Advantages

 Suitable for simple, small-sized systems with a low level of dependency among
software components
 Little to no planning beforehand required
 Easy to set up since all modules are integrated simultaneously

Disadvantages

 Costly and time-consuming for large systems


 Waiting for all modules to be developed before testing
 Hard to isolate and pinpoint bugs in specific modules
 Hard to debug due to the complexity of multiple integrated modules

ii. Bottom-Up Integration Testing


 Perform testing for low-level components first, then gradually move to higher-level
components.
 With the bottom-up approach, testers start with individual modules at the lowest level,
then gradually move to higher-level modules, hence the term “bottom-up”.

iii. Top-down Integration Testing

 Perform testing for high-level components first, then gradually move to lower-level
components
 With the top-down approach, testers start with the highest-level modules, then gradually
move to lower-level modules, hence the term “top-down” (a short sketch of this approach
appears after this list).

iv. Sandwich (Hybrid) Integration Testing

Sandwich Testing (also known as Hybrid Integration Testing) is an approach in which testers
employ both top-down and bottom-up testing simultaneously.

Advantages:

 QA teams can tailor their integration testing activities based on project requirements,
combining the strengths of different methods.
 More flexibility in terms of using resources
 Ensuring that both the low-level and high-level software components are verified at the
same time

Disadvantages:

 Complex, requiring careful planning and coordination to decide which modules to test
using each method
 Effective communication among team members is crucial to ensure consistency and proper
tracking of issues
 Teams may find it challenging to switch between different integration strategies

v. Incremental Integration Testing


 The system is built, tested, and delivered in small increments or builds.
 New components are integrated and tested incrementally.
 It allows for gradual testing and validation of the evolving system.
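
As a concrete illustration of the top-down approach (iii) above, the hedged JUnit 5 sketch below tests a high-level module while a lower-level module that has not yet been integrated is replaced by a stub returning a canned response. CheckoutService and PaymentGateway are invented names, not part of the course material.

```java
// Sketch: top-down integration, with a stub standing in for a low-level module.
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

interface PaymentGateway {
    boolean charge(double amount);
}

class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

    boolean checkout(double amount) { return gateway.charge(amount); }
}

class CheckoutServiceTopDownTest {

    // Stub replacing the real, lower-level payment module until it is integrated.
    static class PaymentGatewayStub implements PaymentGateway {
        @Override
        public boolean charge(double amount) { return true; } // canned response
    }

    @Test
    void highLevelCheckoutWorksAgainstStubbedLowerLevel() {
        CheckoutService service = new CheckoutService(new PaymentGatewayStub());
        assertTrue(service.checkout(49.99));
    }
}
```
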
d) System Testing

System testing is a level of testing that validates a complete and fully integrated software product. The
purpose of a system test is to evaluate the end-to-end system specifications. Usually, the software is only
one element of a larger computer-based system. Ultimately, the software is interfaced with other
software and hardware systems. System testing is defined as a series of different tests whose sole
purpose is to exercise the full computer-based system.

System testing is a level of software testing where a complete and integrated software system is
evaluated to ensure that it meets the specified requirements. It is performed after unit testing and
integration testing and before acceptance testing. The purpose of system testing is to verify that the
entire system functions as intended and to identify any defects or issues before the software is deployed
to the end-users.

The taxonomy of system tests:

o Reliability tests measure the ability of the system to keep operating for a long time without
developing failures.
o Basic tests provide evidence that the system can be installed, configured, and brought to an
operational state.
o Documentation tests ensure that the system’s user guides are accurate and usable.
o Load and stability tests provide evidence that the system remains stable for a long period of time
under full load.
o Regulatory tests ensure that the system meets the requirements of government regulatory bodies in
the countries where it will be deployed.
o Interoperability tests determine whether the system can interoperate with other third-party products.
o Robustness tests determine how well the system recovers from various input errors and other failure
situations.
o Stress tests put a system under stress in order to determine the limitations of a system and, when it
fails, to determine the manner in which the failure occurs.
o Performance tests measure the performance characteristics of the system, for example, throughput and
response time, under various conditions.
o Scalability tests determine the scaling limits of the system in terms of user scaling, geographic
scaling, and resource scaling.
o Regression tests determine that the system remains stable as it cycles through the integration of other
subsystems and through maintenance tasks.
o Functionality tests provide comprehensive testing over the full range of the requirements within the
capabilities of the system.
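
To show how one category from this taxonomy can be automated, the sketch below uses JUnit 5's assertTimeout to express a simple performance (response-time) check. The ReportGenerator class and the two-second budget are illustrative assumptions.

```java
// Sketch: a response-time check as a simple performance test in JUnit 5.
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;

import org.junit.jupiter.api.Test;

class ReportGenerator {
    void generate() {
        // placeholder for real work such as querying and formatting data
    }
}

class ReportGeneratorPerformanceTest {

    @Test
    void reportIsGeneratedWithinTwoSeconds() {
        // The test fails if the operation exceeds the response-time budget.
        assertTimeout(Duration.ofSeconds(2), () -> new ReportGenerator().generate());
    }
}
```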
