Chapter 8: Testing

Your name:
Bùi Anh Trung
Hồ Nguyễn Minh Khoa
Hồ Nguyễn Anh Tuấn
Tăng Viết Phúc

Answer all questions. 1 mark per question.

1. What is the distinction between validation and verification?

-Verification: Verification answers the question, "Are we building the product right?" It involves ensuring that the software system or product is being developed according to its specifications and requirements. In other words, verification focuses on checking whether the product is being built correctly. This process typically involves reviews, inspections, walkthroughs, and testing of various types such as unit testing, integration testing, and system testing.
-Validation: Validation, on the other hand, answers the question, "Are we
building the right product?" It involves ensuring that the software system or
product meets the customer's needs and expectations. Validation focuses on
verifying that the end product satisfies the intended use and will provide value
to the customer. This process often involves user acceptance testing (UAT), beta
testing, and other methods to gather feedback from stakeholders and end-users.

2. What are the advantages of inspections over testing?


-Early detection of defects: Inspections typically occur earlier in the
development process than testing. By reviewing documents, code, or other
artifacts before they are fully developed, defects can be identified and corrected
at a stage where they are less costly and time-consuming to fix. In contrast,
testing often occurs after the product has been developed, which may result in
defects being found later in the process.
-Higher defect detection rate: Inspections are structured reviews conducted by a
team of individuals, including peers and stakeholders. This collaborative
approach often leads to a higher detection rate of defects compared to testing
alone. Different perspectives and expertise contribute to identifying issues that
may not be apparent to individual testers.
-Focused on prevention: Inspections are proactive in nature and focus on
preventing defects from occurring rather than just identifying them after they
have occurred. By analyzing documents, code, or designs in detail, potential
problems can be identified and corrected before they manifest into actual
defects in the product.
-Improvement of team communication and knowledge sharing: Inspections
provide an opportunity for team members to collaborate and share knowledge.
Through discussions and reviews, team members gain a deeper understanding
of the project, which can lead to improved communication, knowledge transfer,
and skill development within the team.
-Cost-effectiveness: While inspections may require more upfront time and
effort compared to testing, they often result in cost savings in the long run. By
identifying and fixing defects early in the development process, the overall cost
of rework and maintenance is reduced, resulting in a more efficient and cost-
effective development lifecycle.

3. Briefly describe the three principal stages of testing for a commercial software system.

-Unit Testing: At this stage, the smallest units of source code are tested independently to ensure their correctness. Unit testing is typically conducted by developers and is an essential part of the software development process (a short sketch follows this list).
-Integration Testing: At this stage, system components are combined and tested
to ensure their interaction and integration. The goal is to test the interfaces
between components and determine if they function as expected.
-System Testing: At this stage, the entire system is tested as a complete unit to
ensure it meets functional and non-functional requirements. System testing is
often conducted in an environment similar to the production environment and
may involve comprehensive test scenarios to ensure the stability and reliability
of the system.
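
To make the unit-testing stage concrete, here is a minimal sketch using Python's built-in unittest module; the add function is a hypothetical unit under test, not something drawn from the question:

    import unittest

    def add(a, b):
        # Hypothetical unit under test: returns the sum of two numbers.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()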

4. What tests should be included in object class testing?

-Constructor Testing: Test the constructors of the class to ensure that objects are
initialized correctly and that default values are set appropriately.
-Method Testing: Test each method of the class with various inputs to ensure
that they produce the expected outputs and handle edge cases correctly. This
includes testing boundary conditions, invalid inputs, and corner cases.
-State Testing: Test the state of the object after invoking methods or performing
operations on it. Verify that the object's attributes are updated correctly and that
the object remains in a valid state throughout its lifecycle.
-Inheritance Testing: If the class inherits from a parent class or implements
interfaces, test its behavior in relation to inheritance and interface
implementation. Verify that it correctly overrides or extends inherited
functionality.
-Exception Handling Testing: Test how the class handles exceptions and error
conditions. Ensure that exceptions are caught and handled appropriately, and
that error messages are informative and helpful to the user.
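
A minimal sketch of several of these tests (constructor, method/state, and exception handling), assuming a hypothetical BankAccount class and Python's unittest:

    import unittest

    class BankAccount:
        # Hypothetical class under test.
        def __init__(self, balance=0):
            if balance < 0:
                raise ValueError("balance cannot be negative")
            self.balance = balance

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            self.balance += amount

    class TestBankAccount(unittest.TestCase):
        def test_constructor_sets_default_balance(self):   # constructor testing
            self.assertEqual(BankAccount().balance, 0)

        def test_deposit_updates_state(self):               # method and state testing
            account = BankAccount(10)
            account.deposit(5)
            self.assertEqual(account.balance, 15)

        def test_deposit_rejects_invalid_amount(self):      # exception handling testing
            with self.assertRaises(ValueError):
                BankAccount().deposit(-1)

    if __name__ == "__main__":
        unittest.main()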

5. What guidelines does Whittaker suggest for defect testing?


-Test Inputs First: Focus on testing inputs rather than outputs. By testing inputs,
you can uncover defects related to data validation, boundary conditions, and
unexpected user inputs.
-Test Data Comprehensively: Ensure that your test data covers a wide range of
scenarios, including valid inputs, invalid inputs, boundary conditions, and
exceptional cases. This helps uncover defects that may occur under different
conditions.
-Test Beyond the Specified Requirements: Don't limit your testing to the
documented requirements. Explore beyond the specified functionality to uncover
defects that may arise from unforeseen interactions or usage scenarios.
-Use Negative Testing: Test for error conditions, exceptional cases, and invalid
inputs to uncover defects related to error handling, boundary conditions, and
unexpected behavior.
- Test Error Handling Mechanisms: Specifically test how the software handles
errors, exceptions, and unexpected conditions. Verify that error messages are
informative and helpful to users and that the software gracefully recovers from
errors.
-Test Interfaces Extensively: Pay special attention to testing interfaces between
different components, modules, or systems. Verify that data is passed correctly,
parameters are validated, and communication protocols are followed.
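
As one illustrative sketch of the negative-testing and error-handling guidelines, assuming a hypothetical parse_age function and Python's unittest:

    import unittest

    def parse_age(text):
        # Hypothetical input-validation function under test.
        value = int(text)  # raises ValueError for non-numeric input
        if not 0 <= value <= 130:
            raise ValueError("age out of range")
        return value

    class TestParseAgeNegative(unittest.TestCase):
        def test_rejects_non_numeric_input(self):
            with self.assertRaises(ValueError):
                parse_age("abc")

        def test_rejects_out_of_range_input(self):
            with self.assertRaises(ValueError):
                parse_age("200")

    if __name__ == "__main__":
        unittest.main()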

6. What is an equivalence partition? Give an example.

An equivalence partition, also known as an equivalence class or equivalence set, is a testing technique used to divide the input domain of a software system into groups of equivalent or similar inputs. The idea behind equivalence partitioning is to reduce the number of test cases while still ensuring adequate test coverage.
Example: Consider a software system that requires a user to input their age, which must be between 18 and 65 years old. Here's how you might apply equivalence partitioning to this scenario:
-Equivalence Partition 1: Age < 18 (invalid inputs). Test cases: 16, 17
-Equivalence Partition 2: 18 ≤ Age ≤ 65 (valid inputs). Test cases: 18, 30, 50, 65
-Equivalence Partition 3: Age > 65 (invalid inputs). Test cases: 66, 70
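
These partitions translate directly into executable tests; a minimal sketch, assuming a hypothetical is_valid_age function and Python's unittest:

    import unittest

    def is_valid_age(age):
        # Hypothetical validator for the age rule described above.
        return 18 <= age <= 65

    class TestAgePartitions(unittest.TestCase):
        def test_partition_below_range_is_invalid(self):
            for age in (16, 17):            # partition 1: Age < 18
                self.assertFalse(is_valid_age(age))

        def test_partition_in_range_is_valid(self):
            for age in (18, 30, 50, 65):    # partition 2: 18 <= Age <= 65
                self.assertTrue(is_valid_age(age))

        def test_partition_above_range_is_invalid(self):
            for age in (66, 70):            # partition 3: Age > 65
                self.assertFalse(is_valid_age(age))

    if __name__ == "__main__":
        unittest.main()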

7. What are the three important classes of interface errors?

-Parameter Interface Errors: These errors occur when parameters passed between
different modules, components, or systems are incorrect, inconsistent, or
misinterpreted. Parameter interface errors can lead to incorrect data processing,
unexpected behavior, or system crashes. Examples include passing the wrong data
type, using incorrect parameter values, or misinterpreting parameter semantics.
-Procedural Interface Errors: These errors occur when the sequence of interactions
between different modules, components, or systems is incorrect or not properly
coordinated. Procedural interface errors can result in race conditions, deadlocks,
or synchronization issues, leading to unpredictable system behavior or
performance degradation. Examples include improper sequencing of function
calls, missing or incorrect synchronization primitives, or incorrect handling of
shared resources.
-Semantic Interface Errors: These errors occur when the meaning or semantics of
the interface between different modules, components, or systems is misunderstood
or misinterpreted. Semantic interface errors can lead to inconsistencies in data
interpretation, incorrect assumptions, or mismatches between expected and actual
behavior. Examples include using incompatible data formats, misinterpreting data
semantics, or failing to adhere to interface specifications.
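
As a small illustration of the first class, a hypothetical sketch of a parameter interface error in which the call type-checks but the parameter's semantics are misinterpreted:

    # Illustrative sketch of a parameter interface error (hypothetical functions).

    def set_timeout(timeout_seconds: float) -> None:
        # Callee interprets the parameter as seconds.
        print(f"Timeout set to {timeout_seconds} seconds")

    def configure_client() -> None:
        # Caller mistakenly passes milliseconds: the call succeeds, but the
        # parameter's meaning is misinterpreted, so the timeout is 1000
        # times longer than intended.
        set_timeout(5000)  # intended: 5 seconds; actual: 5000 seconds

    configure_client()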

8. What should be the principal concerns of system testing?

System testing is a crucial phase in the software testing process, focusing on testing the entire integrated system as a whole. The principal concerns of system testing revolve around ensuring that the software system meets its specified requirements, functions correctly, and performs reliably in its intended environment. These concerns include functional correctness, system integration, performance and scalability, and so on.

9. Briefly summarize the test-driven development process

Test-driven development (TDD) is a software development approach where tests are written before the actual code is implemented. The process typically involves the following steps:
-Write a Test
-Run the Test (and Fail)
-Write the Code
-Run the Test (and Pass)
-Refactor
-Repeat
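
A minimal sketch of one such red-green-refactor cycle, assuming Python's unittest and a hypothetical multiply function:

    import unittest

    # Steps 1-2: the test is written first; running it before multiply
    # exists fails ("red").
    class TestMultiply(unittest.TestCase):
        def test_multiplies_two_numbers(self):
            self.assertEqual(multiply(3, 4), 12)

    # Steps 3-4: write just enough code to make the test pass ("green").
    def multiply(a, b):
        return a * b

    # Steps 5-6: refactor while keeping the test green, then repeat the
    # cycle for the next small piece of behavior.
    if __name__ == "__main__":
        unittest.main()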

10. What is scenario testing?


Scenario testing is a software testing technique that involves testing the software
application by simulating real-world scenarios or user interactions. Instead of
focusing on individual functions or features in isolation, scenario testing
evaluates how the system behaves and responds to various sequences of user
actions or events within a specific context or scenario.
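
A toy sketch of a scenario test, assuming a hypothetical ShoppingCart class and Python's unittest; the test strings together a sequence of user actions rather than exercising one method in isolation:

    import unittest

    class ShoppingCart:
        # Hypothetical system under test.
        def __init__(self):
            self.items = []

        def add(self, item, price):
            self.items.append((item, price))

        def total(self):
            return sum(price for _, price in self.items)

    class TestPurchaseScenario(unittest.TestCase):
        def test_user_adds_items_then_checks_out(self):
            # Scenario: a shopper adds two items, then reviews the total
            # at checkout.
            cart = ShoppingCart()
            cart.add("book", 12.50)
            cart.add("pen", 2.00)
            self.assertEqual(cart.total(), 14.50)

    if __name__ == "__main__":
        unittest.main()
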
11. What is stress testing and why is it useful?

Stress testing is a software testing technique used to evaluate the stability and
performance of a system or application under extreme conditions beyond
normal operational limits. The purpose of stress testing is to identify the
breaking point or failure threshold of the software and understand how it
behaves under heavy loads, adverse conditions, or resource constraints.
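
A toy sketch of the idea using only Python's standard library (a hypothetical handle_request function stands in for real work; production stress tests typically use dedicated load-generation tools):

    import threading

    def handle_request(counter, lock):
        # Hypothetical unit of work standing in for a real request handler.
        with lock:
            counter[0] += 1

    # Stress sketch: launch far more concurrent requests than normal load
    # and check that every one completes without errors or lost updates.
    counter, lock = [0], threading.Lock()
    threads = [threading.Thread(target=handle_request, args=(counter, lock))
               for _ in range(1000)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter[0] == 1000, "requests were lost under load"
    print("handled", counter[0], "concurrent requests")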

12. What are the three types of user testing?

-Alpha Testing: Alpha testing involves testing the software product in a controlled environment by a select group of end-users or internal testers before it
is released to the broader user base. The purpose of alpha testing is to identify
defects, gather initial feedback, and ensure that the software meets basic
requirements and functions correctly. Feedback from alpha testing is used to
make improvements and refinements before the software enters beta testing or
release.
-Beta Testing: Beta testing involves releasing the software product to a larger
group of external users or beta testers who represent the target audience. Beta
testers use the software in real-world scenarios and provide feedback on
usability, functionality, performance, and any issues encountered. Beta testing
helps identify bugs, usability problems, and areas for improvement from a
diverse set of perspectives before the final release to the general public.
-Usability Testing: Usability testing focuses on evaluating the ease of use and
user experience of the software product. It involves observing users as they
interact with the software to perform specific tasks or achieve goals. Usability
testing identifies usability issues, navigation problems, and areas of confusion,
allowing designers and developers to make informed decisions to improve the
user interface and overall user experience.
