
CP5005 SOFTWARE QUALITY ASSURANCE AND TESTING

UNIT I
PART-A
1. What is quality?

• Quality is about making organizations perform for their stakeholders – from improving
products, services, systems and processes, to making sure that the whole organization is
fit and effective.
• Managing quality means constantly pursuing excellence: making sure that what your
organization does is fit for purpose, and not only stays that way, but keeps improving.
EXAMPLES
1. Usable
• A quality website is usable.
2. Maintainable
• Quality bicycles are easy to maintain.
3. Reusable
• Quality packaging is reusable.
4. Robust
• A quality system is robust.
5. Defect (Bug) Free
• Quality software is bug free, usable and reliable.

2. What must an effective quality process focus on?

An effective quality process must focus on:
• Paying much attention to customer’s requirements
• Making efforts to continuously improve quality
• Integrating measurement processes with product design and development
• Pushing the quality concept down to the lowest level of the organization
• Developing a system-level perspective with an emphasis on methodology and process
• Eliminating waste through continuous improvement

3. What are the key elements of TQC management?


The key elements of TQC (total quality control) management are:
• Quality comes first, not short-term profits.
• The customer comes first, not the producer.
• Decisions are based on facts and data.
• Management is participatory and respectful of all employees.
• Management is driven by cross-functional committees covering product planning, product
design, purchasing, manufacturing, sales, marketing, and distribution.

4. What is Verification?
Verification: Are we building the system right?
Verification. The evaluation of whether or not a product, service, or system complies
with a regulation, requirement, specification, or imposed condition. It is often an internal
process. Contrast with validation.
EXAMPLES
1. Walkthrough
2. Inspection
3. Review

5. What is Validation?
Validation: Are we building the right system?
Validation. The assurance that a product, service, or system meets the needs of the
customer and other identified stakeholders. It often involves acceptance and suitability with
external customers.
EXAMPLES
1. Testing
2. End Users

6. Differentiate Verification and Validation?

1. Verification is a static practice of verifying documents, design, code, and programs, whereas
validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code, whereas validation always involves
executing the code.
3. Verification is human-based checking of documents and files, whereas validation is
computer-based execution of the program.
4. Verification uses methods such as inspections, reviews, walkthroughs, and desk-checking,
whereas validation uses methods such as black-box (functional) testing, gray-box testing, and
white-box (structural) testing.
5. Verification checks whether the software conforms to its specifications, whereas validation
checks whether the software meets customer expectations and requirements.
6. Verification can catch errors that validation cannot catch; it is a low-level exercise.
Validation can catch errors that verification cannot catch; it is a high-level exercise.
7. The target of verification is the requirements specification, application and software
architecture, high-level and complete design, and database design; the target of validation is
the actual product: a unit, a module, a set of integrated modules, and the final product.
8. Verification is done by the QA team to ensure that the software conforms to the
specifications in the SRS document, whereas validation is carried out with the involvement of
the testing team.
9. Verification generally comes first and is done before validation, whereas validation
generally follows verification.

7. Define Failure, Error, Fault.


• Failure: A failure is said to occur whenever the external behavior of a system does not conform
to that prescribed in the system specification.

• Error: An error is a state of the system. In the absence of any corrective action by the system,
an error state could lead to a failure which would not be attributed to any event subsequent to the
error.

• Fault: A fault is the adjudged cause of an error.

8. What are the different perspectives of a test process?


• It does work
• It does not work
• Reduce the risk of failure
• Reduce the cost of testing

9. What are the different activities in program testing?


The different activities in program testing are
• Identify an objective to be tested
• Select inputs
• Compute the expected outcome
• Set up the execution environment of the program
• Execute the program
• Analyze the test result

10. What are the different sources of information for test case selection?


The different sources of information for test case selection are:
• Requirements and functional specifications
• Source code
• Input and output domains
• Operational profile
• Fault model

11. Define White Box and Black Box Testing


White-box testing techniques are also called structural testing techniques, whereas black-
box testing techniques are called functional testing techniques.
In structural testing, one primarily examines source code with a focus on control flow and
data flow. Control flow refers to flow of control from one instruction to another. Data flow refers
to the propagation of values from one variable or constant to another variable.
In functional testing, one does not have access to the internal details of a program and the
program is treated as a black box. A test engineer is concerned only with the part that is
accessible outside the program, that is, just the input and the externally visible outcome.
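To make the distinction concrete, here is a minimal sketch (not from the syllabus text) using a hypothetical absolute_value function and pytest-style assertions; the black-box test is derived only from the stated specification, while the white-box test is chosen to exercise both branches of the control flow:

```python
# Hypothetical example: one small function tested from two perspectives.

def absolute_value(x):
    """Return |x| for a numeric input."""
    if x < 0:          # branch 1: negative input
        return -x
    return x           # branch 2: zero or positive input

# Black-box (functional) view: only the specification "returns |x|" is used;
# inputs and expected outputs are chosen without looking at the code.
def test_black_box():
    assert absolute_value(-7) == 7
    assert absolute_value(0) == 0
    assert absolute_value(3.5) == 3.5

# White-box (structural) view: test cases are chosen so that both branches
# of the if statement (control flow) are executed at least once.
def test_white_box_branch_coverage():
    assert absolute_value(-1) == 1   # exercises the x < 0 branch
    assert absolute_value(1) == 1    # exercises the non-negative path
```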

12. Define TDD (Test-Driven Development)


In test-driven development, programmers design and implement test cases before the
production code is written. This approach is a key practice in modern agile software
development processes such as XP
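A minimal TDD-style sketch, assuming a hypothetical add function and pytest-style tests; in TDD the test below would be written and run (and fail) before the production code exists:

```python
# Step 1 (red): the test is written before any production code exists.
def test_add_returns_sum_of_two_numbers():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Step 2 (green): only after the failing test exists is the simplest
# production code written to make it pass.
def add(a, b):
    return a + b

# Step 3 (refactor): the code is cleaned up while the test keeps passing.
```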

13. What are the main characteristics of agile software development processes?
The main characteristics of agile software development processes are
(i) incremental development
(ii) coding of unit and acceptance tests conducted by the programmers along with
customers
(iii) frequent regression testing
(iv) writing test code, one test case at a time, before the production code.

14. What are the benefits of test automation?


Test automation provides an opportunity to improve the skills of the test engineers
by writing programs, and hence their morale. They will be more focused on developing
automated test cases to avoid being a bottleneck in product delivery to the market. Consequently,
software testing becomes less of a tedious job.
The benefits of test automation are,
• Increased productivity of the testers
• Better coverage of regression testing
• Reduced durations of the testing phases
• Reduced cost of software maintenance
• Increased effectiveness of test cases

15. What are the structures of test groups?


16. Define quality assurance as per IEEE Standard 610.12-1990
IEEE Standard 610.12-1990 defines quality assurance as follows:

1. A planned and systematic pattern of all actions necessary to provide adequate confidence that
an item or product conforms to established technical requirements
2. A set of activities designed to evaluate the process by which products are developed or
manufactured.

17. What is Quality Control?

Quality control is another term that is often used in the literature. Quality control is
defined in the IEEE standard 610 as a set of activities designed to evaluate the quality of
developed or manufactured products. The term is used in a production or hardware
manufacturing environment, where a large number of physical items are produced and shipped.
Each of the items has to go through a testing process to ensure that the quality of the product is
good enough for shipment; otherwise, the item is rejected.

18. What are the technical responsibilities of a test manager?

The technical responsibilities of a test manager include (i) developing a test plan,
executing a test plan, and handling crises; (ii) developing, following, and improving test
processes; and (iii) defining, collecting, and analyzing metrics.

19. What is Team Building?

Building a team attitude means managing the engineers in a way that fosters teamwork
instead of individual gains. It involves ongoing effort and practice among all the team members,
including the team manager.

20. What is the role of Technical Leaders?


A technical leader often needs to coordinate the tasks of many engineers working on
complex projects. The technical leader provides the strategic direction for software testing,
assists test managers in collecting test metrics, coordinates test plan review meetings, attends
entry/exit criteria review meetings, and participates in cross-functional review meetings such as
for requirements review, functional specification review, and entry/exit criteria selection.

21. What ideas should be considered to maximize productivity and efficiency through the test
laboratory?

The ideas to be considered for maximizing productivity and efficiency through the test
laboratory are:

(i) creating more than one test environment,
(ii) purchasing new tools and equipment,
(iii) updating existing equipment,
(iv) replacing old equipment,
(v) keeping the test environments clean, and
(vi) keeping an inventory of all the equipment in the laboratory.

22. List the origins of defects.

• Lack of Education
• Poor Communication
• Oversight
• Transcription
• Immature Process

PART-B

1. Illustrate items to be recorded in the management review report


2. Discuss the various activities involved in testing.
3. Give the role and responsibility of different test groups in detail
4. Describe the major objectives of software automation
5. Explain the system test hierarchy and list the parameters to be considered for team building.

UNIT-II
PART-A

1. Define System Integration Techniques


One of the objectives of integration testing is to combine the software modules into a
working system so that system-level tests can be performed on the complete system.
Integration testing need not wait until all the modules of a system are coded and unit tested.
Instead, it can begin as soon as the relevant modules are available.

2. What are the approaches to performing system integration?

Some common approaches to performing system integration are as follows:

• Incremental
• Top down
• Bottom up
• Sandwich
• Big bang

3. What is a nonterminal module?

An internal module, also known as a nonterminal module, performs some computations, invokes
its subordinate modules, and returns control and results to its caller.

4. What are the advantages of the top-down approach?


The advantages of the top-down approach are as follows:

• Test cases designed to test the integration of a module M are reused during the
regression tests performed after integrating other modules.

• Since test inputs are applied to the top-level module, it is natural that those test cases
correspond to system functions, and it is easier to design those test cases than test cases designed
to check internal system functions. Those test cases can be reused while performing the more
rigorous, system-level tests.

5. List the criteria for comparing the top-down and bottom-up approaches.


• Validation of Major Design Decisions
• Observation of System-Level Functions
• Difficulty in Designing Test Cases
• Reusability of Test Cases
6. Define Sandwich
In the sandwich approach, a system is integrated by using a mix of the top-down and
bottom-up approaches. A hierarchical system is viewed as consisting of three layers.
The bottom layer contains all the modules that are often invoked. The top layer
contains modules implementing major design decisions. The rest of the modules are
put in the middle layer.

7. What are the phases in hardware engineering process?

Hardware engineering process is viewed as consisting of four phases: (i) planning and
specification, (ii) design, prototype implementation, and testing, (iii) integration with the
software system, and (iv) manufacturing, distribution, and field service.

8. What is diagnostic test?


Diagnostic tests are the most fundamental hardware tests. Such tests are often embedded
in the basic input–output system (BIOS) component and are executed automatically whenever
the system powers up. The BIOS component generally resides on the system’s read-only
memory (ROM). A diagnostic test is the first test performed to isolate a faulty hardware module.

9. What is MTBF?
Failure rate is often measured in terms of the mean time between failures (MTBF), and it is
expressed as MTBF = total time/number of failures. The probability that a module will work for
some time T without failure is given by R(T) = exp(−T/MTBF). The MTBF metric is a reliability
measurement metric for hardware modules. It is usually given in units of hours, days, or months.
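A short worked example of these formulas, using illustrative numbers only:

```python
import math

# Illustrative numbers only: a module observed for 1000 hours with 4 failures.
total_time = 1000.0        # hours of operation
num_failures = 4

mtbf = total_time / num_failures          # MTBF = total time / number of failures = 250 h

# Probability of working T = 100 hours without failure: R(T) = exp(-T / MTBF)
T = 100.0
reliability = math.exp(-T / mtbf)

print(f"MTBF = {mtbf:.0f} hours")           # 250 hours
print(f"R({T:.0f} h) = {reliability:.3f}")  # about 0.670
```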

10. What is Pairwise Validity?


Tests are performed to ensure that any two systems work properly when connected
together by a network. The difference between pairwise tests and end-to-end tests lies in the
emphasis and type of test cases. For example, a toll-free call to an 800 number is an end-to-end
test of a telephone system, whereas a connectivity test between a handset and local private
branch exchange (PBX) is a pairwise test.

11. Define Error Handling and Event Handling.

• Error Handling: Trigger errors that should be handled by the modules.
• Event Handling: Trigger events (e.g., messages, timeouts, callbacks) that should be handled
by the modules.

12. Define Boundary Value Analysis.


The central idea in boundary value analysis (BVA) is to select test data near the
boundary of a data domain so that data both within and outside an equivalence class (EC) are
selected. It produces test inputs near the boundaries to find failures caused by
incorrect implementation of the boundaries. The BVA technique is an extension
and refinement of the EC partitioning technique.
13. Define Decision Table.
Different combinations of equivalent classes can be tried by using a new technique based on the
decision table to handle multiple inputs. Decision tables have been used for many years as a
useful tool to model software requirements and design decisions. It is a simple, yet powerful
notation to describe complex systems from library information management systems to
embedded real-time systems.
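An illustrative sketch (hypothetical login rules, not from the text) showing how the columns of a decision table map combinations of input conditions to actions, with one test case per rule:

```python
# Hypothetical decision table for a login check.
# Conditions: valid_user, valid_password ; Action: the resulting message.
#
#   Rule            R1      R2      R3
#   valid_user      T       T       F
#   valid_password  T       F       -   (don't care)
#   Action          grant   reject  reject
DECISION_TABLE = {
    (True,  True):  "grant access",
    (True,  False): "reject: wrong password",
    (False, True):  "reject: unknown user",
    (False, False): "reject: unknown user",
}

def login_action(valid_user, valid_password):
    return DECISION_TABLE[(valid_user, valid_password)]

# Each rule (column) of the table yields one test case.
assert login_action(True, True) == "grant access"
assert login_action(True, False) == "reject: wrong password"
assert login_action(False, False) == "reject: unknown user"
```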

14. Under what conditions is decision table-based testing effective?
Decision table-based testing is effective under the following conditions:
• The requirements are easily mapped to a decision table.
• The resulting decision table should not be too large. One can break down a large
decision table into multiple smaller tables.
• Each column in a decision table is independent of the other columns.

15. Give an example of selecting acceptance criteria.


IBM used the quality attribute list CUPRIMDS - capability, usability, performance, reliability,
installation, maintenance, documentation, and service for its products. For example, usability and
maintainability take precedence over performance and reliability for word processing software.

16. What is an ATP?


Planning for acceptance testing begins as soon as the acceptance criteria are known. Early
development of an acceptance test plan (ATP) gives us a good picture of the final product. The
purpose of an ATP is to develop a detailed outline of the process to test the system prior to
making a transition to the actual business use of the system.
17. What is the structure of a typical ATP?
18. What are the assumptions followed during the development of reliability models?

The reliability models are developed based on the following assumptions:


• Faults in the program are independent.
• Execution time between failures is large with respect to instruction execution time.
• Potential test space covers its use space.
• The set of inputs per test run is randomly selected.
• All failures are observed.
• The fault causing a failure is immediately fixed or else its reoccurrence is not counted
again.

19. What is System Testing?

System testing means testing the system as a whole. All the modules or components are
integrated in order to verify if the system works as expected or not. System testing is done
after integration testing. This plays an important role in delivering a high-quality product.

20. What is BVA? Give examples.

Boundary value analysis (BVA) is a black-box test case design technique. It is applied to see
if there are any defects at the boundaries of the input domain. BVA helps in testing the values
at the boundary between valid and invalid partitions.
For example
Consider the testing of a software program that takes the integers ranging between the values
of -100 to +100.
The test cases for BVA are:
• Test cases with data equal to the input boundaries of the input domain: -100 and +100
in our case.
• Test data with values just below the extreme edges of the input domain: -101 and +99.
• Test data with values just above the extreme edges of the input domain: -99 and +101.
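A minimal sketch of these boundary-value test cases in code, assuming a hypothetical accept_value function whose valid input domain is the integers in [-100, +100]:

```python
# Hypothetical function under test: accepts only values in [-100, +100].
def accept_value(x):
    return -100 <= x <= 100

# Boundary values taken from the example above.
ON_BOUNDARY  = [-100, 100]   # exactly on the input boundaries -> accepted
JUST_INSIDE  = [-99, 99]     # just inside the boundaries      -> accepted
JUST_OUTSIDE = [-101, 101]   # just outside the boundaries     -> rejected

def test_boundary_value_analysis():
    for x in ON_BOUNDARY + JUST_INSIDE:
        assert accept_value(x), f"{x} should be accepted"
    for x in JUST_OUTSIDE:
        assert not accept_value(x), f"{x} should be rejected"
```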

PART-B

1. How do you prioritize a set of tests for regression testing? Explain the methodology for
regression testing.
2. Discuss about load, performance and usability tests with suitable examples
3. What is Integration Testing? Why Integration testing? Explain the types
4. Explain briefly the various test architectures developed by ISO
5. Differentiate BB and WB testing with examples.
6. What is acceptance test? Discuss about selection of acceptance criteria of an acceptance test
plan.
UNIT-III

PART-A

1. List the taxonomy of system tests?

2. What is Functionality testing?

Functionality tests verify the system as thoroughly as possible over the full range of requirements
specified in the requirements specification document.

3. What are the usability characteristics can be tested?


The usability characteristics which can be tested include the following:

• Accessibility: Can users enter, navigate, and exit with relative ease?

• Responsiveness: Can users do what they want and when they want in a way that is
clear? It includes ergonomic factors such as color, shape, sound, and font size.

• Efficiency: Can users do what they want with a minimum number of steps and time?

• Comprehensibility: Do users understand the product structure with a minimum amount of
effort?
4. What is Security Test?
Security tests are designed to verify that the system meets the security requirements:
confidentiality, integrity, and availability. Confidentiality is the requirement that data and the
processes be protected from unauthorized disclosure. Integrity is the requirement that data and
process be protected from unauthorized modification. Availability is the requirement that data
and processes be protected from the denial of service to authorized users.

5. Define Robustness Test and list its types


Robustness means how sensitive a system is to erroneous input and changes in its
operational environment. Tests in this category are designed to verify how gracefully the system
behaves in error situations and in a changed operational environment.

6. Define Load and stability tests


Load and stability tests are designed to ensure that the system remains stable for a long
period of time under full load. Load and stability testing typically involves exercising the
system with virtual users and measuring the performance to verify whether the system can
support the anticipated load. This kind of testing helps one to understand the ways the system
will fare in real-life situations.

7. What are the different types of Software systems?


Software systems can be broadly classified into two groups, namely, stateless and state-oriented
systems. The actions of a stateless system do not depend on the previous inputs to the system. A
state-oriented system memorizes the sequence of inputs it has received so far in the form of a
state.

8. What are finite state models?


A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite
automaton, or simply a state machine, is a mathematical model of computation. It is an abstract
machine that can be in exactly one of a finite number of states at any given time. The FSM can
change from one state to another in response to some external inputs; the change from one state
to another is called a transition. An FSM is defined by a list of its states, its initial state, and the
conditions for each transition
9. What are the tuples in an FSM?

An FSM M is defined as a tuple as follows: M = <S, I, O, s0, δ, λ>, where

S is a set of states,
I is a set of inputs,
O is a set of outputs,
s0 is the initial state,
δ : S × I → S is the next-state function, and
λ : S × I → O is the output function.
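A minimal sketch of this tuple definition, using a hypothetical two-state toggle switch as the FSM (lam stands for the output function λ, since lambda is a reserved word in Python):

```python
# Hypothetical FSM M = <S, I, O, s0, delta, lam>: a simple toggle switch.
S  = {"OFF", "ON"}                 # set of states
I  = {"press"}                     # set of inputs
O  = {"light_on", "light_off"}     # set of outputs
s0 = "OFF"                         # initial state

# Next-state function delta : S x I -> S
delta = {("OFF", "press"): "ON",
         ("ON",  "press"): "OFF"}

# Output function lam : S x I -> O
lam = {("OFF", "press"): "light_on",
       ("ON",  "press"): "light_off"}

def run(inputs, state=s0):
    """Apply a sequence of inputs and return the produced outputs."""
    outputs = []
    for x in inputs:
        outputs.append(lam[(state, x)])
        state = delta[(state, x)]
    return outputs

# A tour over both transitions of the machine.
assert run(["press", "press"]) == ["light_on", "light_off"]
```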

10. What are the important points to be considered in Distributed Architecture?


Three important points should be kept in mind with this architecture:

• The lower tester and IUT are physically separated with the implication that they perhaps
observe the same test event at different times.

• Delivery out of sequence, data corruption, and loss of data are possible because of the
unreliable quality of the lower service provider.

• Synchronization and control (test coordination procedures) between the upper and lower
testers are more difficult due to the distributed nature of the test system.

11. What are the factors to be considered for Test Design?


The following factors are considered during test design:
(i) coverage metrics,
(ii) effectiveness,
(iii) productivity,
(iv) validation,
(v) maintenance, and
(vi) user skill.
12. Draw the state transition model for requirement

13. What are the states in modeling a test design process?


Create State
Draft State
Review and Deleted States
Released and Update States
Deprecated State

14. How is test case design effectiveness measured?


A metric commonly used in the industry to measure test case design effectiveness is the test case
design yield (TCDY), defined in terms of NPT, the number of planned test cases, and TCE, the
number of test case escapes.
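A short sketch assuming the commonly used definition TCDY = NPT / (NPT + TCE) × 100%, where escapes are the additional test cases found necessary after the planned test design:

```python
# Assumed definition: TCDY = NPT / (NPT + TCE) * 100%
def tcdy(planned_tests, test_case_escapes):
    """Test case design yield as a percentage."""
    return planned_tests / (planned_tests + test_case_escapes) * 100.0

# Illustrative numbers: 180 planned test cases, 20 escapes discovered later.
print(f"TCDY = {tcdy(180, 20):.1f}%")   # 90.0%
```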

15. Define priority and severity.


Two key concepts involved in modeling defects are the levels of priority and severity. A priority
level is a measure of how soon the defect needs to be fixed, that is, urgency. A severity level is a
measure of the extent of the detrimental effect of the defect on the operation of the product.
Therefore, priority and severity assignments are separately done.

16. What are the metrics for monitoring test execution?

Test Case Escapes (TCE)


Planned versus Actual Execution (PAE) Rate
Execution Status of Test (EST) Cases
17. Define Cause-and-Effect Diagram
The causes of manufacturing defects are analyzed by using the idea of a quality circle,
which uses cause–effect diagrams and Pareto diagrams. A cause–effect diagram is also called an
Ishikawa diagram or a fishbone diagram.
18. What are the three principles of DCA (defect causal analysis)?
Card suggests three key principles to drive DCA:
• Reduce the number of defects to improve quality
• Apply local expertise where defects occur
• Focus on systematic errors

19. Define Beta Testing


Beta testing is conducted by the potential buyers prior to the official release of the
product. The purpose of beta testing is not to find defects but to obtain feedback from the field
about the usability of the product. There are three kinds of beta tests performed based on the
relationships between the potential buyers and the sellers.
• Marketing Beta
• Technical Beta
• Acceptance Beta

PART-B

1. Describe the finite state machine with an example


2. With an example explain transition tour mechanism.
3. Explain in detail the reliability model and its significance
4. Discuss about load, stability and reliability with suitable examples.
5. Explain the metrics for monitoring test execution
6. Explain about different defect reports and causal analysis.
7. Explain the various methodologies and standards for defect prevention.

UNIT-IV

PART-A

1. What does the ISO 9126 framework provide?


ISO 9126 (ISO, 2001) provides a hierarchical framework for quality definition, organized into
quality characteristics and sub-characteristics. There are six top-level quality characteristics:
Functionality
Reliability
Usability
Efficiency
Maintainability
Portability
2. What do people's quality expectations rely upon?
People's quality expectations for software systems they use and rely upon are twofold:
1. The software systems must do what they are supposed to do. In other words, they must do the
right things.
2. They must perform these specific tasks correctly or satisfactorily. In other words, they must do
the things right.

3. Define Reliability
Reliability: A set of attributes that bear on the capability of software to maintain its level of
performance under stated conditions for a stated period of time. The sub-characteristics include:
- Maturity
- Fault tolerance
- Recoverability

4. Define Quality Factors


A quality factor represents a behavioral characteristic of a system. Some examples of high-level
quality factors are correctness, reliability, efficiency, testability, portability, and reusability.

5. How Quality Factors has been classified?

Quality factors have been grouped into three broad categories as follows:
• Product operation
• Product revision
• Product transition

6. Define Quality Criteria


A quality criterion is an attribute of a quality factor that is related to software development. For
example, modularity is an attribute of the architecture of a software system. A highly modular
software allows designers to put cohesive components in one module, thereby increasing the
maintainability of the system.

7. What are the three components of the ISO 9000:2000 standard?


There are three components of the ISO 9000:2000 standard as follows:
ISO 9000 : Fundamentals and vocabulary
ISO 9001 : Requirements
ISO 9004 : Guidelines for performance improvements

8. What are the eight principles of ISO 9000:2000 ?


• Principle 1. Customer Focus
• Principle 2. Leadership
• Principle 3. Involvement of People
• Principle 4. Process Approach
• Principle 5. System Approach to Management
• Principle 6. Continual Improvement
• Principle 7. Factual Approach to Decision Making
• Principle 8. Mutually Beneficial Supplier Relationships
9. What are the attributes of the capability maturity model?
In software development processes we seek three desirable attributes as follows:
1. The products are of the highest quality. Ideally, a product should be free of defects. However,
in practice, a small number of defects with less severe consequences are generally tolerated.
2. Projects are completed according to their plans, including the schedules.
3. Projects are completed within the allocated budgets.

10. What are the five levels of maturity?


Level 1: Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimizing

11. Define CMM


A CMM is a reference model of mature practices in a specific discipline used to appraise and
improve a group’s capability to perform that discipline. The CMMs differ in three ways, namely,
discipline, structure, and definition of maturity.

12. How does the CMMI derive its information?


The CMMI includes information from the following models:
• Capability maturity model for software (CMM-SW)
• Integrated product development capability maturity model (IPD-CMM)
• Capability maturity model for systems engineering (CMM-SE)
• Capability maturity model for supplier sourcing (CMM-SS)

13. List some test activities


• Identifying test goals
• Preparing a test plan
• Identifying different kinds of tests
• Hiring test personnel
• Designing test cases
• Setting up test benches
• Procuring test tools
• Assigning test cases to test engineers
• Prioritizing test cases for execution

14. What is TMM?


The TMM gives guidance concerning how to improve a test process. Each stage is
characterized by the concepts of maturity goals, supporting maturity goals, and activities,
tasks, and responsibilities.

15. What are the maturity goals?


The maturity goals are as follows:
• Develop testing and debugging goals.
• Initiate a test planning process.
• Institutionalize basic testing techniques and methods.

16. What are the five levels of the TMM model?


Level 1. Initial
Level 2. Phase Definition
Level 3. Integration
Level 4. Management and Measurement
Level 5. Optimization, Defect Prevention, and Quality Control

17. What are the activities of test optimization?


Test optimization involves the following activities:
• Identify the testing practices that can be improved.
• Define a mechanism to improve an identified practice.
• Put a mechanism in place to track the practice improvement mechanism.
• Continuously evaluate new test-related technologies and tools.
• Develop management support for technology transfer.

PART-B

1. Briefly explain McCall’s quality factors and quality criteria


2. What does the ISO 9126 framework provide? Explain it in detail.
3. Explain ISO 9000:2000 Software Quality Standard in detail
4. Explain Test Process Improvement
5. Briefly explain the different levels of TMM in terms of their maturity goals
6. Compare and contrast the various quality assurance techniques and activities

UNIT-V

PART-A

1. What is Root Cause Analysis?


Root cause analysis can be performed on the product under development to identify the
common defects and their causes, so that appropriate defect prevention activities can be selected
and applied. Root cause analysis can usually take two forms: logical analysis and statistical
analysis.

2. What is formal verification?


Formal verification checks the conformance of software design or code to formal
specifications, thus ensuring that the software is fault-free with respect to its formal
specifications. Therefore, formal specification can be considered a defect prevention strategy.
3. What are the steps involved in accident prevention?
Accident prevention includes two generic steps:
1. Analysis of actual or potential accident scenarios with a focus on preconditions, or hazards, for
these scenarios. This type of analysis is called hazard analysis.
2. Preventive or remedial actions for accident prevention, referred to as hazard resolution, to deal
with the hazards identified in the above analysis. Generic ways include hazard elimination,
hazard reduction, and hazard control.

4. What are the basic ideas of fault-tree analysis (FTA)?

The basic ideas of fault-tree analysis (FTA) are:

• The basic analysis tool is the logical diagram called a fault tree, which also represents the
analysis results. Nodes in a fault tree represent various events or conditions and are
connected through logical connectors, AND, OR, and NOT, to represent logical relations
among sub-conditions.
• The analysis follows a top-down procedure: starting with the top event and recursively
analyzing each event or condition to find out its logical conditions or sub-conditions.
The top event is usually associated with an accident and is represented as the root node
of the tree.
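An illustrative sketch (hypothetical events and tree structure) of how a small fault tree built from AND/OR connectors can be represented and evaluated for given values of the basic events:

```python
# Hypothetical fault tree: the top event "system failure" occurs if the power
# fails AND (the primary pump fails OR the backup pump fails).
fault_tree = ("AND",
              "power_failure",
              ("OR", "primary_pump_failure", "backup_pump_failure"))

def occurs(node, basic_events):
    """Evaluate a fault-tree node given truth values of the basic events."""
    if isinstance(node, str):                      # leaf: a basic event/condition
        return basic_events[node]
    gate, *children = node                         # internal node: logical connector
    results = [occurs(c, basic_events) for c in children]
    if gate == "AND":
        return all(results)
    if gate == "OR":
        return any(results)
    if gate == "NOT":
        return not results[0]
    raise ValueError(f"unknown gate {gate}")

# Power failed and the backup pump failed -> the top event occurs.
assert occurs(fault_tree, {"power_failure": True,
                           "primary_pump_failure": False,
                           "backup_pump_failure": True})
```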

5. What is Defect perspective?

Is the QA technique dealing with errors, faults, or failures? This question can be broken down
further with the execution/observation of the specific QA activities and the follow-up actions,
where different defect perspectives may be taken. For example, during testing, failures are
observed, which lead to follow-up actions to locate and fix the faults that caused these observed
failures.

6. How can questions and criteria be related to cost?

If the total cost can be calculated for each QA alternative, then it can be used together with the
benefit assessment to select appropriate ones for a specific environment. The direct cost for
carrying out the planned QA activities typically involves the time and effort of the software
professionals who perform related activities and the consumption of other resources such as
computer systems and supporting facilities. In addition, there are also indirect costs, such as
training project participants, acquisition and support for related software tools, meeting time and
other overhead.

7. What are the objects of QA alternatives?


8. What is the expertise level required for QA alternatives?

9. How can the defect perspective be examined?


Defect perspective examination can be broken down further into two parts:
• Detection or observation of specific problems from specific defect perspectives during
the performance of specific QA activities.
• Types of follow-up actions that deal with the observed or detected problems in specific
ways as examined from the defect perspectives.

10. What are different problem types dealt with by QA alternatives?

11. What is QA monitoring?


The primary purpose of QA monitoring is to ensure proper execution of the planned QA
activities through the use of various measurements. These measurements also provide the data
input to subsequent analysis and modeling activities.

12. What are measurements for quality control?


Direct quality measurements: Result and defect measurements
Indirect quality measurements: Environmental, product internal, and activity measurements

13. What are basic ideas of risk identification?

The basic idea of risk identification is to use predictive modeling to focus on the high-risk areas,
as follows:
• First, we need to establish a predictive relationship between project metrics and actual
product defects based on historical data.
• Then, this established predictive relation is used to predict potential defects for the new
project or new product release once the project metrics data become available, but before
actual defects are observed in the new project or product release.
• In the above prediction, the focus is on the high-risk or the potentially high-defect
modules or components.
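A minimal sketch of this predictive-modeling idea with made-up numbers: a simple least-squares fit relates a project metric (lines changed per module) to defects observed in a past release, and the fitted relation is then used to rank the modules of a new release by predicted risk:

```python
# Hypothetical data: (lines_changed, observed_defects) from a previous release.
historical = [(120, 3), (540, 11), (80, 1), (300, 7), (900, 20)]

# Least-squares fit y = a + b*x (closed form, single metric for simplicity).
n = len(historical)
sx = sum(x for x, _ in historical)
sy = sum(y for _, y in historical)
sxx = sum(x * x for x, _ in historical)
sxy = sum(x * y for x, y in historical)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# Apply the fitted relation to the new release before its defects are known.
new_release = {"mod_auth": 650, "mod_ui": 90, "mod_billing": 400}  # lines changed
predicted = {m: a + b * x for m, x in new_release.items()}

# Focus QA effort on the highest-risk (highest predicted defect) modules.
for module, defects in sorted(predicted.items(), key=lambda kv: -kv[1]):
    print(f"{module}: ~{defects:.1f} predicted defects")
```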

14. What are the characteristics of web-based applications?


Web applications possess various unique characteristics that affect the choices of
appropriate techniques for web testing. One of the fundamental differences is the document and
information focus for the web as compared to the computational focus for most traditional
software.

15. What are the methods for testing the different layers of web applications?

PART-B

1. Explain briefly the FSM-Based Testing of Web-Based Applications


2. Briefly explain the various sources employed in quality monitoring and measurement
3. How is risk identification employed for quality improvement? Discuss.
4. Explain the three types of indirect measurements in QA.
5. Brief out the different analysis techniques used for hazard analysis
6. What is an ECO (engineering change order)? Explain hardware and software ECOs in detail
7. What is fault tree and event tree analysis? Explain
