
Subjective paper:

1. Your client has not done acceptance testing in the past, but is fully aware of its benefits. He is asking you about the contents that should go into the acceptance test plan. What would you suggest?
- 10 points

2. Give examples and define the following:


(1) Equivalence partitioning (2) Boundary values and (3) Error guessing
- 15 points

3. You are developing an e-commerce application for the first time. Knowing well that the same cannot be validated on all the browser and operating system versions, what measures would you take in the test plan to provide adequate confidence that the application would work satisfactorily?
- 10 points

4. Define the standards for the following:


a. Test plan b. Test scripts and c. Test Report
- 15 points

5. You have estimated a testing phase of 4 weeks in your test plan. Due to changing requirements, the development team delays the handoff to you (the testing team) by a week, but your final shipment date is the same. Without a change in any of the other resources (manpower, or any other resource), what would you do to cater to the situation (one week less for testing)?
- 15 points.

6. Why should you test the application?


- 15 points

7. Testing is costly for any organisation. You feel that the organisation is emphasising testing too much and in fact all the applications are overtested. What are the symptoms/consequences that make you sure that the application is overtested?
- 15 points

8. You are working on a project where the requirements change dynamically. The data in the project comes from various ends (from various platforms) and is inter-dependent. You see this as a big risk in the project. How would you plan accordingly?
- 35 points

9. a. What would be the Test objective for unit testing?


b. What would be the quality measurements to assure that unit testing is
complete?
c. (Not sure)- One more aspect of Unit testing.
-15 points

10. Define and explain any three aspects of Code review?


- 15 points

11. How would you measure the following?


a. Test Effectiveness
b. Test Efficiency
- 10 points

12. You are given two scenarios to test. Scenario 1 has only one terminal for entry and processing, whereas Scenario 2 has several terminals where the data input can be made. Assuming that the processing work is the same, what would be the specific tests in Scenario 2 that you would do, which you would not carry out in Scenario 1?
- 15 points.

--------------------------------------------------------------------

Objective questions: There were 48 objective questions in the first paper and 50
questions in the second paper. All of them were from the CBOK guide. Read the
book thoroughly and you will be able to answer them.

Some of them relate to these:


1. Testing is least costly in the REQUIREMENTS phase.
2. Black box testing - Functional requirements (Match the following)
3. White box testing - Logical testing(Match the following)
4. Test plan- Match the following (with appropriate definition)
5. Test objective- Match the following (with appropriate definition)
6. Test data - Match the following (with appropriate definition)
7. COQ comprises -------
8. Training forms ------- phase of COQ.
9. Testing accounts for ------ percent of the system development cost.
10. A requirement defect found at production is ------- times as costly to correct (the choices were 5, 10, 50 and 500 times)
11. Which is not a testing risk - 4 choices were given.
12. Which is not a software risk - 4 choices were given.
13. Integration testing occurs when - 4 choices were given.
14. Which should not be done during Constructive criticism - 4 choices were given.
15. The objective of closing the proposal should be -----------
16. Critical listening is ----- (four choices were given)
17. If there was a conflict, what would you do (All the subheadings in the CBOK for
conflict resolution were given as choices and there was choice named "All the
above").
18. Juran was famous for what ----- (four choices)
19. Pareto analysis is also called ----- (one of the choices was 20-80 rule)
20. The entity to improve the testing process methodologies - four choices (Quality Assurance)
21. The responsibility of testing lies on ----
22. Which is not a Tool (the choices were checklist, test plan, .....)
23. About 60% of the total defects originate from the Requirements phase (True/False)
24. Match the following (for the following definitions): 1. Incremental Testing 2. Thread Testing 3. Test Data...
25. Out of the 5 perspectives of Quality, which is the perspective that accounts for "Fitness for use"?
26. Incremental testing can be done in two ways: a. Top down and b. Bottom Up (True/False)
27. Which of the following about testing is True (the choices were like 'Testing is easy', 'Testing does not require any skill', etc.)
28. Which of the following is NOT a secondary item for a tester (4 choices were given)
29. Three questions on in-process reviews.
30. One question on phase-end review.
31. Which is the important Test objective for Unit testing?
32. Which of the following is a Structural Testing?
33. Configuration management includes ---- (the choices were like done only after development, only after coding, throughout all the phases, etc.)

Courtesy : R Anand (quoted from his memory after writing the exam)
Testing related Q&A
What makes a good test engineer?
• A good test engineer has a 'test to break' attitude, an ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail.

• Tact and diplomacy are useful in maintaining a cooperative relationship with developers,
and an ability to communicate with both technical (developers) and non-technical
(customers, management) people is useful.

• Previous software development experience can be helpful as it provides a deeper
understanding of the software development process, gives the tester an appreciation for
the developers' point of view, and reduces the learning curve in automated test tool
programming.

• Judgement skills are needed to assess high-risk areas of an application on which to
focus testing efforts when time is limited.
What makes a good software QA engineer?
• The same qualities a good tester has are useful for a QA engineer.

• Additionally, they must be able to understand the entire software development process
and how it can fit into the business approach and goals of the organization.

• Communication skills and the ability to understand various sides of issues are important.

• In organizations in the early stages of implementing QA processes, patience and
diplomacy are especially needed.

• An ability to find problems as well as to see 'what's missing' is important for inspections
and reviews.
What makes a good QA or Test manager?
• A good QA, test, or QA/Test (combined) manager should:

• be familiar with the software development process

• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a
somewhat 'negative' process (e.g., looking for or preventing problems)

• be able to promote teamwork to increase productivity

• be able to promote cooperation between software, test, and QA engineers

• have the diplomatic skills needed to promote improvements in QA processes

• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient
or QA processes are not being adhered to

• have people judgement skills for hiring and keeping skilled personnel

• be able to communicate with technical and non-technical people, engineers, managers, and
customers.

• be able to run meetings and keep them focused


What's the role of documentation in QA?

• Critical. (Note that documentation can be electronic, not necessarily paper.)


• QA practices should be documented such that they are repeatable.
• Specifications, designs, business rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user manuals, etc. should all be
documented.
• There should ideally be a system for easily finding and obtaining documents and
determining which documents contain a particular piece of information.
• Change management for documentation should be used if possible.

What's the big deal about 'requirements'?

• One of the most reliable methods of ensuring problems, or failure, in a complex
software project is to have poorly documented requirements specifications.
• Requirements are the details describing an application's externally-perceived
functionality and properties.
• Requirements should be clear, complete, reasonably detailed, cohesive,
attainable, and testable. A non-testable requirement would be, for example, 'user-friendly'
(too subjective).
• A testable requirement would be something like 'the user must enter their
previously-assigned password to access the application'.
• Determining and organizing requirements details in a useful and efficient way can
be a difficult effort; different methods are available depending on the particular project.
• Many books are available that describe various approaches to this task.
• Care should be taken to involve ALL of a project's significant 'customers' in the
requirements process.
• 'Customers' could be in-house or outside personnel, and could include end-users,
customer acceptance testers, customer contract officers, customer management, future
software maintenance engineers, salespeople, etc.
• Anyone who could later derail the project if their expectations aren't met should
be included if possible.
• Organizations vary considerably in their handling of requirements specifications.
• Ideally, the requirements are spelled out in a document with statements such as
'The product shall...'.
• 'Design' specifications should not be confused with 'requirements'; design
specifications should be traceable back to the requirements.
• In some organizations requirements may end up in high level project plans,
functional specification documents, in design documents, or in other documents at
various levels of detail.
• No matter what they are called, some type of documentation with detailed
requirements will be needed by testers in order to properly plan and execute tests.
• Without such documentation, there will be no clear-cut way to determine if a
software application is performing correctly.

What steps are needed to develop and run software tests?

The following are some of the steps to consider:


• Obtain requirements, functional design, and internal design specifications and other necessary
documents

• Obtain budget and schedule requirements


• Determine project-related personnel and their responsibilities, reporting requirements, required
standards and processes (such as release processes, change processes, etc.)

• Identify application's higher-risk aspects, set priorities, and determine scope and limitations of
tests

• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

• Determine test environment requirements (hardware, software, communications, etc.)

• Determine testware requirements (record/playback tools, coverage analyzers, test tracking,
problem/bug tracking, etc.)

• Determine test input data requirements

• Identify tasks, those responsible for tasks, and labor requirements

• Set schedule estimates, timelines, milestones

• Determine input equivalence classes, boundary value analyses, error classes

• Prepare test plan document and have needed reviews/approvals

• Write test cases

• Have needed reviews/inspections/approvals of test cases

• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up logging
and archiving processes, set up or obtain test input data

• Obtain and install software releases

• Perform tests

• Evaluate and report results

• Track problems/bugs and fixes

• Retest as needed

• Maintain and update test plans, test cases, test environment, and testware through life cycle
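Two of the steps above, determining input equivalence classes and boundary value analyses, lend themselves to a small code sketch. The example below assumes, for illustration only, a hypothetical input field that accepts integers from 1 to 100; the function names are not from any standard library:

```python
# Boundary value analysis and equivalence partitioning for a hypothetical
# integer field accepting values in the inclusive range 1..100.

def derive_boundary_values(lo, hi):
    """Return the classic boundary test inputs for an inclusive [lo, hi] range:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """Return one representative input per equivalence class:
    below the valid range, within it, and above it."""
    return {
        "invalid_low": lo - 10,
        "valid": (lo + hi) // 2,
        "invalid_high": hi + 10,
    }

if __name__ == "__main__":
    print(derive_boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
    print(equivalence_classes(1, 100))
```

Each value returned becomes one test input; the expected result for the invalid classes is whatever error handling the requirements specify.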
What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and 'how' of
product validation. It should be thorough enough to be useful but not so thorough that no one
outside the test group will read it. The following are some of the items that might be included
in a test plan, depending on the particular project:
• Title

• Identification of software including version/release numbers

• Revision history of document including authors, dates, approvals


• Table of Contents

• Purpose of document, intended audience

• Objective of testing effort

• Software product overview

• Relevant related document list, such as requirements, design documents, other test plans, etc.

• Relevant standards or legal requirements

• Traceability requirements

• Relevant naming conventions and identifier conventions

• Overall software project organization and personnel/contact-info/responsibilities

• Test organization and personnel/contact-info/responsibilities

• Assumptions and dependencies

• Project risk analysis

• Testing priorities and focus

• Scope and limitations of testing

• Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable

• Outline of data input equivalence classes, boundary value analysis, error classes

• Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems

• Test environment setup and configuration issues

• Test data setup requirements

• Database setup requirements

• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs

• Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs

• Test automation - justification and overview

• Test tools to be used, including versions, patches, etc.

• Test script/test code maintenance processes and version control

• Problem tracking and resolution - tools and processes

• Project test metrics to be used


• Reporting requirements and testing deliverables

• Software entrance and exit criteria

• Initial sanity testing period and criteria

• Test suspension and restart criteria

• Personnel allocation

• Personnel pre-training needs

• Test site/location

• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues

• Relevant proprietary, classified, security, and licensing issues.

• Open issues

• Appendix - glossary, acronyms, etc.


What's a 'test case'?

• A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly.

• A test case should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.

• Note that the process of developing test cases can help find problems in the requirements or design of
an application, since it requires completely thinking through the operation of the application. For this
reason, it's useful to prepare test cases early in the development cycle if possible.
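The particulars listed above can also be encoded directly in automated test code. A minimal sketch using Python's unittest; the `withdraw` function under test and its business rules are assumptions for illustration, not from any real system:

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical function under test: debit an account, rejecting
    non-positive amounts and overdrafts."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

class TestWithdraw(unittest.TestCase):
    # Test case identifier: TC-ACCT-001 (illustrative naming)
    # Objective: verify a valid withdrawal reduces the balance
    def test_valid_withdrawal(self):
        self.assertEqual(withdraw(100, 30), 70)   # expected result

    # Test case identifier: TC-ACCT-002
    # Objective: verify an overdraft attempt is rejected
    def test_overdraft_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 200)
```

Run with `python -m unittest`; each test method maps to one documented test case, with the objective and expected result recorded in comments and assertions.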
What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it.
After the problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available. The following are
items to be considered in the tracking process:
• Complete information such that developers can understand the bug, get an idea of its severity, and
reproduce it if necessary.

• Bug identifier (number, ID, etc.)

• Current bug status (e.g., 'Released for Retest', 'New', etc.)

• The application name or identifier and version

• The function, module, feature, object, screen, etc. where the bug occurred

• Environment specifics, system, platform, relevant hardware specifics

• Test case name/number/identifier


• One-line bug description

• Full bug description

• Description of steps needed to reproduce the bug if not covered by a test case or if the developer
doesn't have easy access to the test case/test script/test tool

• Names and/or descriptions of file/data/messages/etc. used in test

• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem

• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

• Was the bug reproducible?

• Tester name

• Test date

• Bug reporting date

• Name of developer/group/organization the problem is assigned to

• Description of problem cause

• Description of fix

• Code section/file/module/class/method that was fixed

• Date of fix

• Application version that contains the fix

• Tester responsible for retest

• Retest date

• Retest results

• Regression testing requirements

• Tester responsible for regression tests

• Regression testing results


A reporting or tracking process should enable notification of appropriate personnel at various stages.
For instance, testers need to know when retesting is needed, developers need to know when bugs are found
and how to get the needed information, and reporting/summary capabilities are needed for managers.
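As a sketch of how these tracking fields might be structured, here is a minimal, hypothetical bug record in Python; the field names (a subset of the list above) and the `needs_retest` rule are assumptions, not taken from any particular commercial tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    """Subset of the bug-tracking fields listed above; names are illustrative."""
    bug_id: str                       # bug identifier (number, ID, etc.)
    status: str                       # e.g. 'New', 'Released for Retest'
    application: str
    version: str
    summary: str                      # one-line bug description
    severity: int                     # 1 (critical) .. 5 (low)
    reproducible: bool
    tester: str
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    retest_result: Optional[str] = None

    def needs_retest(self) -> bool:
        # A fix has been recorded but no retest result yet.
        return self.fix_description is not None and self.retest_result is None
```

A real tracker would add dates, environment details, and attachments, but even this skeleton supports the notification flow described above (e.g. querying for records where `needs_retest()` is true).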

What is 'configuration management'?

Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided
with some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors
in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)

• Test cases completed with certain percentage passed

• Test budget depleted

• Coverage of code/functionality/requirements reaches a specified point

• Bug rate falls below a certain level

• Beta or alpha testing period ends


What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.

Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects.
This requires judgement skills, common sense, and experience. (If warranted, formal methods
are also available.)

Considerations can include:

• Which functionality is most important to the project's intended purpose?

• Which functionality is most visible to the user?

• Which functionality has the largest safety impact?

• Which functionality has the largest financial impact on users?

• Which aspects of the application are most important to the customer?

• Which aspects of the application can be tested early in the development cycle?

• Which parts of the code are most complex, and thus most subject to errors?

• Which parts of the application were developed in rush or panic mode?

• Which aspects of similar/related previous projects caused problems?


• Which aspects of similar/related previous projects had large maintenance expenses?

• Which parts of the requirements and design are unclear or poorly thought out?

• What do the developers think are the highest-risk aspects of the application?

• What kinds of problems would cause the worst publicity?

• What kinds of problems would cause the most customer service complaints?

• What kinds of tests could easily cover multiple functionalities?

• Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project.
However, if extensive testing is still not justified, risk analysis is again needed and the same
considerations as described previously apply.
The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?

A common problem and a major headache.


• Work with the project's stakeholders early on to understand how requirements might change so
that alternate test plans and strategies can be worked out in advance, if possible.

• It's helpful if the application's initial design allows for some adaptability so that later changes do
not require redoing the application from scratch.

• If the code is well-commented and well-documented this makes changes easier for the developers.

• Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.

• The project's initial schedule should allow for some extra time commensurate with the possibility
of changes.

• Try to move new requirements to a 'Phase 2' version of an application, while using the original
requirements for the 'Phase 1' version.

• Negotiate to allow only easily-implemented new requirements into the project, while moving more
difficult new requirements into future versions of the application.

• Be sure that customers and management understand the scheduling impacts, inherent risks, and
costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted - after all, that's their job.

• Balance the effort put into setting up automated tests with the expected effort required to redo
them to deal with changes.

• Try to design some flexibility into automated test scripts.

• Focus initial automated testing on application aspects that are most likely to remain unchanged.

• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

• Design some flexibility into test cases (this is not easily done; the best bet might be to minimize
the detail in the test cases, or set up only higher-level generic-type test plans).

• Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding
of the added risk that this entails).

What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process.

If the functionality isn't necessary to the purpose of the application, it should be removed, as it
may have unknown impacts or dependencies that were not taken into account by the designer or
the customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs.
Management should be made aware of any significant added risks as a result of the unexpected
functionality.
If the functionality only affects areas such as minor improvements in the user interface, for
example, it may not be a significant risk.

How can Software QA processes be implemented without stifling productivity?

By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem detection,
panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the
same time, attempts should be made to keep processes simple and efficient, minimize
paperwork, promote computer-based processes and automated tracking and reporting, minimize
time required in meetings, and promote training as part of the QA process. However, no one -
especially talented technical types - likes rules or bureaucracy, and in the short run things may
slow down a bit. A typical scenario would be that more days of planning and development will be
needed, but less time will be required for late-night bug-fixing and calming of irate customers.

What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:
• Hire good people

• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer

• Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers.
Thus testing requirements can be extensive. When time is limited (as it usually is) the focus
should be on integration and system testing.
Additionally, load/stress/performance testing may be useful in determining client/server
application limitations and capabilities.
There are commercial tools to assist with such testing.

How can World Wide Web sites be tested?

Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, javascript,
plug-in applications), and applications that run on the server side (such as cgi scripts, database
interfaces, logging applications, dynamic page generators, asp, etc.).
Additionally, there are a wide variety of servers and browsers, various versions of each, small but
sometimes significant differences between them, variations in connection speeds, rapidly
changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort.
Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time), and what kind of
performance is required under such loads (such as web server response time, database query response
times)?

• What kinds of tools will be needed for performance testing (such as web load testing tools, other
tools already in house that can be adapted, web robot downloading tools, etc.)?

• Who is the target audience? What kind of browsers will they be using? What kind of connection
speeds will they be using? Are they intra-organization (thus with likely high connection speeds and
similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

• What kind of performance is expected on the client side (e.g., how fast should pages appear, how
fast should animations, applets, etc. load and run)?

• Will down time for server and content maintenance/upgrades be allowed? How much?

• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it
expected to do? How can it be tested?

• How reliable are the site's Internet connections required to be? And how does that affect backup
system or redundant connection requirements and testing?

• What processes will be required to manage updates to the web site's content, and what are the
requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

• Which HTML specification will be adhered to? How strictly? What variations will be allowed for
targeted browsers?

• Will there be any standards or requirements for page appearance and/or graphics throughout a site
or parts of a site?

• How will internal and external links be validated and updated? How often?

• Can testing be done on the production system, or will a separate test system be required? How are
browser caching, variations in browser option settings, dial-up connection variabilities, and real-world
internet 'traffic congestion' problems to be accounted for in testing?

• How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked,
controlled, and tested?

• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.

• The page layouts and design elements should be consistent throughout a site, so that it's clear to
the user that they're still within a site.

• Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.

• All pages should have links external to the page; there should be no dead-end pages.

• The page owner, revision date, and a link to a contact person or organization should be included
on each page.
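Basic link extraction, the first half of the link-validation item above, can be sketched with the Python standard library alone; checking each extracted link against a live HTTP response (e.g. via urllib) is omitted here and would be needed for real validation:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags so each can later be validated."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all anchor href values found in an HTML string, in order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Run periodically against each page of the site, this gives the inventory of internal and external links whose targets need checking.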
How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal design to
functional design to requirements.
While there will be little effect on black box testing (where an understanding of the internal design
of the application is unnecessary), white-box testing can be oriented to the application's objects. If
the application was well-designed this can simplify test design.

What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements.
It was created by Kent Beck, who described the approach in his book 'Extreme Programming
Explained'.
Testing ('extreme testing') is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code first - before the application is
developed.
Test code is under source control along with the rest of the code.
Customers are expected to be an integral part of the project team and to help develop scenarios
for acceptance/black box testing.
Acceptance tests are preferably automated, and are modified and rerun for each of the frequent
development iterations.

QA and test personnel are also required to be an integral part of the project team.

Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and
re-prioritizing is expected.
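The test-first practice described above can be illustrated with a tiny sketch; the `fizzbuzz` example is hypothetical, chosen only to show the write-the-test-before-the-code ordering:

```python
import unittest

# Step 1: write the unit test first. Before fizzbuzz() exists, this test
# fails, which is the expected starting state in test-first development.
class TestFizzBuzz(unittest.TestCase):
    def test_expected_outputs(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(9), "Fizz")
        self.assertEqual(fizzbuzz(10), "Buzz")
        self.assertEqual(fizzbuzz(7), "7")

# Step 2: write just enough code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

In XP both the test and the implementation are checked into source control together, and the whole suite is rerun on each of the frequent development iterations.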
1. You are appointed Quality Assurance Manager. Your IT Director wants you to do the testing and
Quality Control review. How would you tell/convince your IT Director that this is not your
responsibility as a Quality Assurance Manager?
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________

2. Your operational department encounters production defects at the rate of 3 defects per 1000 lines of
code. During a vendor meet you come across another company, XYZ, which is of the same size as yours and
in the same industry. This company’s production defect rate is 2 defects per 1000 lines of code. Can you
conclude that XYZ’s operational department is better than yours? Can you use this for
benchmarking? Explain.
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________

3. List the measures/metrics you would use for the following:


i. Software Reliability
ii. Productivity
iii. Process Improvement
iv. Customer satisfaction

Note: Answer the following cases in not more than 5 lines. The answers can be bulleted with
the stated assumptions:
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
__________________________________________________________________________

4. Plan a test organization for testing mission critical, time critical software.
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________.
5. As a consultant to an organization involved in handling large software projects, how
would you impress on the management on the need for structured testing?
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______.

6. You are project leader of a team working on software for a customer with changing
requirements. How would you plan the project?
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______.

7. You are heading the Quality Control Team in testing a life-critical software development
project. What will be your approach to testing, and when would you advise stopping
testing?
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________

8. Your project is on a tight schedule and very little time is available. List the major areas you will test in such
a situation. The test team does not know when to stop testing. Please recommend suitable action.
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
___________________.

9. One of the customers called the help desk and reported that there is a problem in the software and the system
is not working. The help desk referred the problem to you. How will you go about handling it?
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________

10. Give examples and define the following:


(1) Equivalence partitioning (2) Boundary values and (3) Error guessing
1. Decision/branch coverage strategy :
a) always satisfies statement coverage
b) is used in black box testing
c) means that every branch direction is traversed at least once
d) is the same as condition coverage

Match the following


2. Retesting modules connected to the changed program - Black box testing
3. Determines how well the user can interact with the system - White box Testing
4. Demonstrates key functional capabilities by testing a string - System Testing
of units
5. Path or Logic in a unit or program is known - Regression Testing
6. Testing the functions of the program against the specifications - Thread Testing
7. Test all the integrated components of an information system - Usability Testing

8. Explain with examples : Equivalence Partitioning, Boundary analysis, Error guessing

9. Incremental testing can be done in two ways. a. Top down and b. Bottom Up (True/False)

10. Joe is performing a test to see that it complies with the user requirement that a certain field be
populated, by using a dropdown containing a list of values. Joe is performing:
a. White-box testing
b. Black Box testing
c. Load Testing
d. Regression testing

11. Acceptance testing means


a. Testing performed on a single stand-alone module or unit of code
b. Testing after changes have been made to ensure that no unwanted changes were
introduced
c. Testing to ensure that the system meets the needs of the organization and the end user

12. The purpose of software testing is to :


a. Demonstrate that the application works properly
b. Detect the existence of defects
c. Validate the logical design

13. Integration testing occurs when


a. The system is ready to use
b. Interfaces between unit-tested programs needs to be tested
c. When testing client/server applications
d. Testing the functions of the program against the specifications

14. In Management directive, IT management convenes a group of the more senior and respected
individuals in the dept. to develop a policy jointly (T/F)

15. Boundary value testing :


a. Is the same as equivalence partitioning tests
b. Is used in white-box testing strategy
c. Tests combination of input circumstances
d. Tests conditions on, above and below the edges of input and output equivalence classes

16. Bottom-up testing requires the development of interim stubs for test input (T/F)

17. Defects and Issues are identified and corrected during reviews (T/F)
18. The primary responsibility of individuals accountable for testing activities is
a. To find out defects
b. To understand the system
c. to ensure that quality is measured accurately
d. to prove that developers are inefficient

Match the following


19. Test Manager
a. Prepares test Documentation
b. Designs the Test strategy
c. Test data planning, capture and conditioning
d. All of the above

20. Test Engineer


a. Manages the Test effort
b. Tracks and reports Defects
c. Design artifacts
d. all of the above

Match the following


21. Performed when application components near completion - In Process Review
22. Coding begins at the close of this review - S/W requirements review
23. Looks at identifying defects as work progresses - Critical design review
24. CM Plan, Test plan and development plan are reviewed - Test readiness review

25. Formal reviews may be held at anytime (T/F)

26. Reviews provide training in and enforce the use of standards (T/F)

27. Verification and validation activities begin


a. After development phase
b. After Design phase
c. At the start of the project
d. Before closing the project

28. __________ activities define a specific implementation solution to solve the user’s needs
a. Analysis
b. Design
c. Conceptualization
d. Operation

29. System test plan generation and verification activity takes place during the
a. Analysis Activity
b. Design Activity
c. Conceptualization Activity
d. Operation Activity

30. Implementation activities support the use of software by the end user in an operational
environment (T/F)

31. Answer in brief


a. What would be the Test objective for unit testing
b. What would be the quality measurements to assure that unit testing is complete?
32. S/W engineer is responsible for
a. Identifying tool objectives
b. Defining selection criteria
c. Identifying candidate tools
d. All of the above

33. Mention the exit criteria for Integration Test

34. Define and explain any three aspects of Code review?

35. Integration test exit criteria :


a. 75-80% of total system functionality and 90% of major functionality delivered
b. No open severity 1 or 2 defects
c. Successful execution of integration test plan
d. All of the above
Objective Test - Category III
1. __________ is a probability that a loss will occur

2. What are the 2 major components of risk

a. Event and Loss


b. Probability and Loss
c. Judgement and Instinct

Match the Following


3. Description of the condition or capability of the system - Test Cases
4. Document that defines the overall testing objectives - Test Objective
5. Identifies all steps required to exercise the test cases - Test Plan
6. Refines Test Approach and identifies the features to be - Test Procedures
covered by the design and its associated tests
7. Document containing the actual values for input, along - Test Design
with the anticipated outputs
8. Statement of what the tester is expected to accomplish - Requirement
during the testing activity

9. Which is not a testing risk - 4 choices were given


a. Budget
b. Test Environment
c. Portability of the system
d. New Technology

10. Which is not a software risk - 4 choices were given


a. Availability of Automated Tools
b. System will be difficult to operate
c. Programs will be unmaintainable
d. Unreliable results from the system

11. ____________ % of system enhancements and repairs introduce new defects to the application
a. 10-20
b. 20-40
c. 20-50
d. 50-70

12. Which of the following information will not be included in a Defect Tracking Tool
a. Name of the tester who reported the defect
b. Person to whom the defect is assigned
c. Defect Number
d. Description

13. Configuration management includes –


a. Managing artifacts after development activity
b. Managing artifacts after Testing activity
c. Managing artifacts throughout all the phases

14. Requirements should include which of the following


a. Functionally what the program is to do
b. Form, format, data types and units
c. How exceptions, errors, deviations are to be handled
d. Technical architecture/target platform
e. All of the Above

24. Decision/branch coverage strategy :


a) always satisfies statement coverage
b) is used in black box testing
c) means that every branch direction is traversed at least once
d) is the same as condition coverage

25. The process used for documenting user’s requirements is known as validation (T/F)

26. Normally an efficient testing team should have at least 10 or more objectives (T/F)

27. Testing objectives should restate the project objectives from the project plan (T/F)

28. What are the guidelines for writing a Test plan

29. How would you measure the following?


a) Test Effectiveness
b) Test Efficiency

30. Environment competency is the ability to use the test environment established by test management (T/F)

31. Knowledge of the most common risks associated with software development and the platform you
are working on is known as ___________
a. Testing Risk
b. Software Risk
c. Business Risk
d. Premature Release Risk

32. Selecting the size and competency of staff needed to achieve the test plan objectives is
known as _________
a. Estimating
b. Scheduling
c. Staffing

33. What would be the Test objective for unit testing? What would be the quality measurements to
assure that unit testing is complete?

Match the following


34. Stress Testing - Testing the response time for the transactions in an
application
35. Load Testing - Testing an application repeatedly over a specific time
period
36. Performance Testing - Testing an application for n number of users
1. If you observe one of your audience members not being able to concentrate, what would you do and
why?
a. Ignore him
b. Ask what is troubling his mind, if you could help
c. Ask him to concentrate
d. Ask him to get out

2. How can you influence someone to perform in a certain way?

3. Give 3 reasons for why people are often not good listeners.

4. __________ is the most common form of communication

a. Oral Communication
b. Listening
c. Verbal Communication
d. Written Communication

5. What is the maximum amount of time you have with a complainer to begin offering solution to
their complaint?
a. 1 minute
b. 2 minutes
c. 4 minutes
d. 30 minutes

6. You see a programmer constantly under performing. You know he is a good worker but of late he
has not been doing anything right. What would you do?
a. Throw him out of the job
b. Give him time, maybe he’ll turn around
c. Talk to him privately and try and understand his problem, provide constructive
feedback/criticism
d. Scold him in front of all his co-workers; it will surely make an impression

7. Which one of these is not a misconception about testing?


a. Anyone can test and no particular skill set is required
b. Testers delay a project
c. Testing itself is error prone.
d. Testing identifies errors and therefore puts the blame for those errors on the development
organization
e. Testers can test for quality at the end of the project

8. What are the Components of Effective Listening?

9. What is the Primary Objective of the system proposal from the Producer’s viewpoint
a. To present the costs/benefits of the proposal
b. To obtain an agreement for more work
c. To standardize presentations

10. What type of change do you need before you can obtain a behavior change?
a. Lifestyle
b. Vocabulary
c. Internal
d. Management

11. Which one is the best tactic in constructive criticism


a. Do it in public, while others are listening, so they too can learn from other people’s mistakes
b. Be prepared to help your subordinate improve his/her performance
c. Criticize the individual rather than the product, because the individual has created the product
d. Explain what will happen to his career if the employee’s behavior doesn’t change

11. One of the key concepts of a task force is that the leader be an expert in leading groups as opposed to
an expert in a topical area. (True/False)

12. Consensus means


a. Majority rules
b. You don’t have to like it, you just have to be able to accept it
c. Whatever the boss says
d. Compromise

13. It is normal for people to object to any new idea – (True/False)

14. Awareness training should be directed at selling benefits – (True/False)

15. Anyone who has vested interest in a topic should be invited to awareness training – (True/False)

16. The recommended course of action should be given immediately following awareness training –
(True/False)

17. Awareness training is only associated with problem resolution – (True/False)

18. Change normally will not occur unless the customer’s objections are overcome – (True/False)

19. Awareness training should not last over 2 hours – (True/False)

20. __________ is the most powerful Quality Control Tool


a. Control Chart
b. Pareto Analysis
c. Checklist
d. Histogram

21. Awareness training should follow the same general administrative procedures as any other training
program conducted in the organization – (True/False)

22. A business opportunity came your way when you were working with XYZ Ltd. This opportunity
would result in a conflict of interest between you and your company. What would you do?
a. Accept the opportunity as your own and inform the company
b. Reject the opportunity completely
c. Accept the opportunity and not inform the company
d. Hand over the opportunity to the company

23. The current project you are working on is under a lot of fire. The project manager is looking for
volunteers to take up more responsibilities. What would you do:
a. Completely ignore. You have already finished your own work so why should you bother
about others.
b. Analyze your schedule and decide how many responsibilities you can take, and then talk to
your PM to assign them to you
c. Take up all the responsibilities. At least you will be able to finish some.
d. Pick up one; after all it is your Project Manager asking

24. Which of these is not an effective means of making a lasting impression in a presentation?
a. Give out handouts at the beginning of the presentation
b. Give out handouts at the end of the presentation
c. Tell them what you are going to tell them, tell them, then repeat what you have told them
d. End the training with an action/assignment to take

25. Explain with example Vision, Goals, Principles and Values.

26. Is the inspection of source code QA or QC?

27. Which of these is not a quality attribute?


a) Reliability
b) Transparency
c) Usability
d) Flexibility
e) Correctness

28. The National Quality Awards provide a basis for successfully benchmarking against companies
(True/False)

29. The cost of maintaining a helpdesk is which type of cost


a) Prevention
b) Appraisal
c) Failure

30. Who is responsible for ensuring quality in the organization


a) Employees
b) Management
c) Everybody involved in the working of the organization
d) The Quality Assurance Department

31. The cost of poor quality is what percentage of cost of doing business.
a) 15 - 50%
b) 50 - 80%
c) 80- 90%

32. Which of the following is not one of Deming’s Management Diseases?


a) Excessive medical cost
b) Mobility of Management
c) Evaluation of performance, merit rating, or annual review of performance
d) Breaking down barriers in work areas

33. Who propounded the theory “Quality is free”?


a) Malcolm Baldrige
b) Joseph M. Juran
c) Philip Crosby
d) Dr. Ishikawa
e) Edward Deming

34. Who amongst these is an internal customer?


a) The tester
b) The client
c) The final customer
d) The ISO auditor

35. Which of these would constitute Management by fact?


a) Collecting metrics on competitors business
b) Forming a process to perform and control work
c) Relying on instinct without any data
d) Picking the odd one out.

36. QA is the same as total QC (True/False)

37. _______ % of defects originate in the requirements stage


a) 10-20
b) 30-50
c) 50-70
d) 40-70

38. The intent of Quality Control Checklist is to ask questions (True/False)

39. In Pareto Analysis, 80 % of the items would account to 20 % of the frequency (True/False)

40. From Customer’s point of view, Quality is “Meeting Requirements” (True/False)

41. Which of the following is not a Deming’s principle


a) Eliminate Numerical goals
b) Mobility of Management
c) Breaking down barriers in work areas
Sample of a test policy.

All test activities should be systematically planned and tracked by the Project Manager.

• Testing should cover the system functionality in its entirety as documented in


the functional specification.
• Test activities should be conducted based on the test cases.
• Test defects should be captured and closed.
• Root cause analysis on defects should be performed and preventive actions
should be taken.

Please find below a test objective written for a product.

1. To prepare the test cases against the functional requirement document and the
customization specific to the client.

2. To test the functionality of the product and the customization specific to the
client against the test cases approved by the Test Lead.

A test strategy is, in short, a statement of which types of testing you are going to address and how you are
going to achieve each of them.

Two components of Test Strategy:

Test Factor : What type of testing is required and how we are going to proceed with the testing.
Testing Phase : Which type of testing occurs in which phase. For example, unit testing is done after the
construction phase, and integration testing is done after integrating the modules.

For example: if we have planned for Unit Testing, Integration Testing, and then System Testing, we define
the scope for each type of testing.

Unit Testing: The objective of unit testing has to be defined, e.g. whether it is functionality testing alone or
whether we are going to test for performance as well. If we are going to use any testing techniques, such as
branch and condition testing or loop testing, those also need to be mentioned in our test strategy.

For the application discussed, plane takeoff can be written in Module-A and landing in Module-B.

The test strategy for unit testing is that each and every line of code has to be tested, since it is a
mission-critical application.
Following testing techniques will be used in the unit testing phase:

1. Branch and condition testing.


2. Loop testing.
3. Performance testing.
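As an illustration of branch and condition testing, here is a small sketch in Python. The takeoff_clearance function, its threshold, and the inputs are invented for illustration and are not from any real flight system; the point is that the test set drives every branch direction at least once.

```python
# Hypothetical unit from the takeoff example (Module-A). The function and
# its 150-knot threshold are illustrative assumptions only.
def takeoff_clearance(airspeed_knots, runway_clear):
    # Branch 1: both the true and the false direction must be exercised.
    if airspeed_knots >= 150:
        ready = True
    else:
        ready = False
    # Branch 2: a compound condition -- condition testing additionally
    # varies each operand (ready, runway_clear) independently.
    if ready and runway_clear:
        return "cleared"
    return "hold"

# A minimal test set achieving branch coverage: every branch direction
# (if-true and if-false) is traversed at least once.
assert takeoff_clearance(160, True) == "cleared"   # both ifs take the true path
assert takeoff_clearance(100, True) == "hold"      # first if takes the false path
assert takeoff_clearance(160, False) == "hold"     # second if false via runway_clear
```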

Integration Testing: After the integration of Module-A and Module-B, we have to test for functionality
and performance.
Following testing techniques will be used in Integration testing phase:
1. Top-down approach
2. Bottom-up approach

System Testing (Black Box Testing): System testing should be done mostly in the end user's environment.

Following testing techniques will be used in system testing:

1. Boundary Value Analysis


2. Equivalence Partitioning.
3. Any flight simulation technique, and how we are going to approach it.
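Boundary value analysis at the system test level can be sketched as follows. The boundary_values helper and the altitude range are assumptions for illustration; the technique itself is simply to test on, just above, and just below each edge of the valid input class.

```python
# A sketch of boundary value analysis for one numeric input field. Given a
# valid range [lo, hi], pick test values on and immediately either side of
# each boundary.
def boundary_values(lo, hi):
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# For an assumed valid altitude range of 0..45000 feet:
values = boundary_values(0, 45000)
assert values == [-1, 0, 1, 44999, 45000, 45001]
```

Each of these six values would be fed to the system under test; the three inside the range should be accepted and the ones outside rejected.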

Regression Testing : To test the modules which have undergone changes, and the impacted areas.
Existing functionality in the system should not be affected because of these changes.

All the points above should be in the test strategy, according to the need. By identifying the types of
testing and their objectives, the risks will also be addressed.

Courtesy: V Suresh
The list of fields in Test Director-Defects Grid are given below. Beside the field name,
I've given a brief note as to what the field will have.

Defect ID - Defect Number (Automatically generated field)


Severity - Severity of the defect (fatal, medium, minor, severe, enhancement)
(mandatory field)
Module - Name of the module in which the defect was found(mandatory field)
Status - Status of the defect (open, fixed, pending, reopen, retest successful)
Assigned To - The person or team the defect is assigned to
Priority - Priority of the defect (low, medium, high, very high, urgent)
Detected By - the person who identified the defect (defaults to the login id)
(mandatory field)
Assign1 - The person or team the defect is assigned to (displays the list of login ids)
Summary - Summary of the description(mandatory field)
Detected on Date - Date on which the defect was identified(mandatory field)
Closing Date - Date on which the defect was closed
Category - Category in which the defect will fall into (Application Error, Environment,
Database Error)(mandatory field)
Description - Defect description (detail description of the defect)
R&D Comments - Comments
Actual Fix Time - Actual time taken for fixing the defect
Attachments - Whether any files were attached to support the defect description or
R&D comments etc(mandatory field)
Closed in Version - Handoff Number
Detected in Version - Handoff Number
Estimated Fix Time - Estimated time for fixing the defect
Modified - Last date on which the R & D comments were updated
Planned Closing Version - Handoff number
Project - Name of the project
Reproducible - whether defect is reproducible (Y or N)
Solution / Action - Whether solution is attached or not (Y or N)
Subject - helpful if TD is used as a repository for Testcases
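For illustration, one defect record built from the fields listed above might look like the following sketch. The field names follow the list; the defect data and the completeness check are invented for this example and are not part of Test Director itself.

```python
# A sketch of a single record from the Defects Grid. Values in comments
# mirror the allowed values noted in the field list above.
defect = {
    "Defect ID": 1042,                 # automatically generated
    "Severity": "medium",              # fatal/medium/minor/severe/enhancement
    "Module": "Login",                 # module in which the defect was found
    "Status": "open",                  # open/fixed/pending/reopen/retest successful
    "Priority": "high",                # low/medium/high/very high/urgent
    "Detected By": "tester01",         # defaults to the login id
    "Summary": "Login fails for locked-out users",
    "Detected on Date": "2003-06-14",  # invented date, for illustration
    "Category": "Application Error",   # Application Error/Environment/Database Error
    "Reproducible": "Y",               # Y or N
}

# A simple completeness check over some of the mandatory fields:
mandatory = ["Severity", "Module", "Detected By", "Summary",
             "Detected on Date", "Category"]
assert all(defect.get(field) for field in mandatory)
```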

Courtesy: Madhuri
How would you describe test design? Here is the IEEE/ANSI description.

------------------------------------------------------------------------------------------------------------
IEEE/ANSI Std 829-1983

Purpose:

To specify refinements of the test approach and to identify the features to be covered by the design and its
associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the
testing and specifies the feature pass/fail procedure.

Outline:
-Test design specification identifier.
-Features to be tested.
-Approach refinements.
-Test case identification.
-Feature pass/fail criteria.

------------------------------------------------------------------------------------------------------------

When does test design begin? It begins in the requirements stage. Most bugs are introduced in the
requirements and design phases. What this means is that we need to be involved with the whole product life
cycle from the beginning.

The typical stage of test design starts with requirements generation. It is at this point that a high-level test
design is done, plus a determination can be made of the components that would need to be tested, as well as
an overall integration approach.

This continues with the design stage, where more detailed test plans and details can be created.

When designing tests, there are some things to keep in mind: the main objectives.

Objective in test design:


1. Detect as many defects as possible.
2. Minimize test development costs.
3. Minimize test execution costs.
4. Minimize test maintenance costs.

Detailed design considerations:


1. Satisfying test development objectives.
2. Conforming to the test architecture.
3. Design of each test case.

The steps to detailed design:


1. Identifying the items that should be tested.
2. Assigning priorities to those items based on risk.
3. Designing high level test designs for similar groups of items.
4. Designing individual test cases based on the high level designs.
A determination must be made of which of the following are highest to lowest priority in the testing process,
to establish the critical path with regard to test design.
Types of system test:
1. Volume
2. Usability
3. Performance
4. Configuration
5. Compatibility
6. Reliability
7. Load/Stress
8. Security
9. Resource Usage
10. Installability
11. Recovery
12. Serviceability

The other things involved with test design involve determining the metrics that will be used to determine
when the testing group will sign off on the software for release. Also, what type of physical resources will
be necessary (lab space, hardware, other related tools, or any specific software and licenses).
1. Most QA groups in fact practice QA – T/F False
2. Between QA & QC latter is most important – T/F

3. Quality is an attribute of a _product_. A product is something _created_ (developed)
4. Both QA and QC are required to make quality happen – T/F False
5. what are expected production costs?
a. labor, material, equipment
b. personnel, training, rollout
c. training, testing, user acceptance. ANS: A
6. What is involved in Total Product costs? all of the answers to #5
7. Appraisal costs are:
a. costs associated with preventing errors
b. costs associated with detecting errors
c. costs associated with defective products delivered to customers. ANS:B
8. Quality is _Everyone’s_ responsibility
9. Types of measurement of performance are:
a. strategic, statistical, operational
b. strategic, tactical, operational ANS: B
10. Quality control is a _reactive_ function
11. QA is a _proactive_ function.
12. Validation means _testing_ of the final program with respect to _established criteria_
13. Certification means acceptance of software by an authorized agent after completion of
development cycle _ T/F
14. Verification generally means
a. Consistency
b. Completeness
c. Correctness ANS:A
15. Static analysis and dynamic analysis are only used during verification - T/F
16. Debugging occurs before the program or module is fully executable and continues until the
programmer feels the program is sound and executable. – T/F False
17. Verification activities during design stages are _Walkthroughs_, _Reviews_ and _Inspections_
18. Verification activities during operation and maintenance is _Rework_
19. Invalid inputs are elements within the functional domain - T/F
20. Some typical techniques used during testing are
a. Intuition
b. Hand calculation
c. Simulation
d. Alternate solution to the same problem. ANS: C
21. Exhaustive input set of expected correct responses is equivalent to writing the entire program
itself – T/F False (guess)

22. A subset of the domain used in a testing process is called _Equivalence Partitioning_. (Breaking
down data, i.e. testing a rule that edits data on a range of 5000 to 10000: there are three partitions to
test: under 5000, 5000-10000, and over 10000.)
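The 5000-to-10000 edit rule in the note above can be sketched in code. The in_valid_range function and the representative values chosen are illustrative; the principle is that any one value from a partition stands in for the whole class.

```python
# The edit rule from the note above: values from 5000 to 10000 are valid.
def in_valid_range(value):
    return 5000 <= value <= 10000

# One representative per equivalence class: under 5000, 5000-10000, over 10000.
representatives = {"under": 3000, "valid": 7500, "over": 12000}
assert in_valid_range(representatives["valid"]) is True
assert in_valid_range(representatives["under"]) is False
assert in_valid_range(representatives["over"]) is False
```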
26 _Using a Checklist_ is the key to successful testing.
27 Having clear, concise statement of the problem to be solved will facilitate construction,
communication, error analysis and test data generation. T/F
28. Error analysis can be done at implementation stage T/F
29. Testing metrics are _____, ___________, __________
30. Testing is used to exercise the code over a sufficient range of test data to verify its adherence to
the design and requirements specification. T/F True
31. List some criteria for the selection of test data for test set.
high-level requirements
production extract available
version control of data
base set of data
32. ____________ testing helps to compensate for the inability to do exhaustive function testing
33. Test coverage metrics are very useful for determining a ____________________ for the test
data.
34. The fraction of 'times per day' to 'number of days' of occurrence of an event gives the likelihood
of occurrence of risk.
a) True
b) False
35. Data gathering while testing helps ______________
a) data analysis
b) to determine test cases
c) for conducting tests
d) input for testing ANS: B
1. Which is not a testing risk?
a. Budget
b. Test Environment
c. Portability of the system
d. New Technology

2. Which is not a software risk?


a. Availability of Automated Tools
b. System will be difficult to operate
c. Programs will be unmaintainable
d. Unreliable results from the system

3. Decision/branch coverage strategy:


a) always satisfies statement coverage
b) is used in black box testing
c) means that every branch direction is traversed at least once
d) is the same as condition coverage
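Why option (a) holds but the converse does not can be shown with a toy function; an `if` with no `else` has a branch direction that contains no statement of its own (the names here are illustrative, not taken from the question):

```python
def apply_discount(price, is_member):
    # The false direction of this branch has no statement of its own,
    # so a single member call already executes every line...
    if is_member:
        price = price * 0.9
    return price

# Statement coverage is satisfied by this one test:
print(apply_discount(100, True))
# ...but decision/branch coverage also demands the false direction:
print(apply_discount(100, False))
```

Hence full branch coverage implies full statement coverage, while the reverse is not guaranteed.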

4. Joe is performing a test to see that it complies with the user requirement that a certain
field be populated, by using a dropdown containing a list of values. Joe is
performing:
a) White-box testing
b) Black Box testing
c) Load Testing
d) Regression testing

5. Acceptance testing means


a. Testing performed on a single stand-alone module or unit of code
b. Testing after changes have been made to ensure that no unwanted changes were introduced
c. Testing to ensure that the system meets the needs of the organization and the end user

6. The purpose of software testing is to:


a. Demonstrate that the application works properly
b. Detect the existence of defects
c. Validate the logical design

7. Integration testing occurs when


a. The system is ready to use
b. Interfaces between unit-tested programs need to be tested
c. When testing client/server applications
d. Testing the functions of the program against the specifications
8. Boundary value testing :
a) Is the same as equivalence partitioning tests
b) Is used in white-box testing strategy
c) Tests combination of input circumstances
d) Test conditions on, above and below edges of input and output equivalence
classes
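Option (d) can be sketched as a small test-data generator. A hedged example, assuming an inclusive integer range [low, high]; the function name is illustrative:

```python
def boundary_values(low, high):
    """Candidate test inputs on, just below, and just above each edge
    of an inclusive integer input range [low, high]."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# For the classic 5000-10000 range:
print(boundary_values(5000, 10000))
```

Defects cluster at the edges, so these six values find more errors per test case than values picked from the middle of a partition.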

9. The primary responsibility of individuals accountable for testing activities is


a) To find out defects
b) To understand the system
c) To ensure that quality is measured accurately
d) To prove that developers are inefficient

10. Test Manager


a) Prepares test Documentation
b) Designs the Test strategy
c) Test data planning, capture and conditioning
d) All of the above

11. Test Engineer


a) Manages the Test effort
b) Tracks and reports Defects
c) Design artifacts
d) all of the above

12. Verification and validation activities begin


a) After development phase
b) After Design phase
c) At the start of the project
d) Before closing the project

13. System test plan generation and verification activity takes place during the
a) Analysis Activity
b) Design Activity
c) Conceptualization Activity
d) Operation Activity

14. S/W engineer is responsible for


a) Identifying tool objectives
b) Defining selection criteria
c) Identifying candidate tools
d) All of the above
15. Integration test exit criteria:
a) 75-80% of total system functionality and 90% of major functionality delivered
b) No open severity 1 or 2 defects
c) Successful execution of integration test plan
d) All of the above

Match the following with the choices given below (answers shown against each item):

1. Performed when application components near completion. - d. Test Readiness Review
2. Coding begins at the close of this review. - c. Critical Design Review
3. Looks at identifying defects as work progresses. - a. In Process Review
4. CM Plan, Test plan and development plan are reviewed. - b. S/W Requirements Review

a. In Process Review
b. S/W Requirements Review
c. Critical Design Review
d. Test Readiness Review

Q5 - Bottom-up testing requires the development of interim stubs for test input
a. True
b. False

Q6 - __________ is the most powerful Quality Control Tool


a. Control Chart
b. Pareto Analysis
c. Checklist
d. Histogram

Q7 - Knowledge of the most common risks associated with software development and the platform
you are working on is known as ___________
a. Testing Risk
b. Software Risk
c. Business Risk
d. Premature Release Risk

Q8 - ___________ % of system enhancements and repairs introduce new defects to the application
a. 20 – 40
b. 20 – 50
c. 50 – 70
d. 40

Q9 - _______ % of defects originate in the requirements stage


a. 10-20
b. 30-50
c. 50-70
d. 40-70

Q10 - _________ report can be prepared to show the status of testing


a. Testing Action Report
b. Final Test Report
c. Functional testing status report
d. Functions working Timeline

Q11 - Name few Test Coverage Tools

Q12 - ___________ % of system enhancements and repairs introduce new defects to the application
a. 20 – 40
b. 20 – 50
c. 50 – 70
d. 40

Q13- Test Readiness review is conducted by


a. Test Engineer
b. Project Manager
c. Test Manager
d. Developer

Q14 - Integration testing occurs when _________ is complete


a. Unit testing
b. System Testing
c. Phase-end review
d. In-process Review

Q15- Match the following


Stress Testing Testing the Response time for the transactions in an application
Load Testing Testing an application repeatedly over a specific time period
Performance Testing Testing an application for n number of users

Q16 - Selecting the size and competency of staff needed to achieve the test plan
objectives is known as _________
a. Estimating
b. Scheduling
c. Staffing

Q17 - Decision/branch coverage strategy :


a. always satisfies statement coverage
b. is used in black box testing
c. means that every branch direction is traversed at least once
d. is the same as condition coverage
Q18 - The process used for documenting user’s requirements is known as
validation (True/False)

Q19 - Normally an efficient testing team should have at least 10 or more objectives
(True/False)

Q20 - Testing objectives should restate the project objectives from the project plan
(True/False)

Q21 - What are the guidelines for writing a Test plan

Q22 - How would you measure the following?


Test Effectiveness
Test Efficiency

Q23 - Environment competency is the ability to use the test environment established by test
management (True/False)

Q24 - Knowledge of the most common risks associated with software development and the platform
you are working on is known as ___________
a. Testing Risk
b. Software Risk
c. Business Risk
d. Premature Release Risk

Q25 - Match the Following


Description of the condition or capability of the system - Test Cases
Document that defines the overall testing objectives - Test Objective
Identifies all steps required to exercise the test cases - Test Plan
Refines Test Approach and identifies the features to be covered by the design and its associated tests - Test Procedures
Document containing the actual values for input, along with the anticipated outputs - Test Design
Statement of what the tester is expected to accomplish during the testing activity - Requirement

Q26 - __________ activities define a specific implementation solution to solve the user’s needs
a. Analysis
b. Design
c. Conceptualization
d. Operation

Q27 - System test plan generation and verification activity takes place during the
a. Analysis Activity
b. Design Activity
c. Conceptualization Activity
d. Operation Activity

Q28 - Implementation activities support the use of software by the end user in an
operational environment (True/False)

Q29 - Answer in brief


What would be the Test objective for unit testing
What would be the quality measurements to assure that unit
testing is complete?

Q30 - S/W engineer is responsible for


a. Identifying tool objectives
b. Defining selection criteria
c. Identifying candidate tools
d. All of the above

Q31 - Mention the exit criteria for Integration Test

Q32 - Define and explain any three aspects of Code review?

Q33 - Match the following

Performed when application components near completion - In Process Review
Coding begins at the close of this review - S/W requirements review
Looks at identifying defects as work progresses - Critical design review
CM Plan, Test plan and development plan are reviewed - Test readiness review

Q34 - Match the following


Retesting modules connected to the changed program - Black box testing
Determines how well the user can interact with the system - White box Testing
Demonstrates key functional capabilities by testing a string of units - System Testing
Path or Logic in a unit or program is known - Regression Testing
Testing the functions of the program against the specifications - Thread Testing
Test all the integrated components of an information system - Usability Testing

Q35 - Explain with examples: Equivalence Partitioning, Boundary Analysis, Error Guessing
Q36 - The intent of Quality Control Checklist is to ask questions (True/False)

Q37 - The National Quality Awards provide a basis for successfully benchmarking against
companies (True/False)

Q38 - The cost of maintaining a helpdesk is which type of cost


a. Prevention
b. Appraisal
c. Failure

Q39 - __________ is the most powerful Quality Control Tool


a. Control Chart
b. Pareto Analysis
c. Checklist
d. Histogram

Q40 - What type of change do you need before you can obtain a behavior change?

a. Lifestyle
b. Vocabulary
c. Internal
d. Management
Answers to above 1 to 40:

1) Correct.
2) Correct
3) Correct
4) Correct
5) True
6) Checklist
7) Software Risk
8) 20-50
9) 50-70
10) Functional Testing Status Report
11) McCabe, JCover
12) Repeat – Q8
13) Test Manager
14) Unit Testing
15) Stress Testing: Testing an application repeatedly over a specific time period
Load Testing: Testing an application for n number of users
Performance Testing: Testing the Response time for the transactions in an
application

16) Staffing
17) means that every branch direction is traversed at least once
18) False
19) True
20) True
21) About Test Design, Test Environment, Test Strategy, Test Risk.
22) Test Effectiveness = (Total STRs - STRs not mapped) / Total STRs x 100
Test Efficiency = (Total Test Cases - STRs that cannot be mapped to test cases) / Total Test Cases x 100
23) True
24) Repeat – Q7.
25) Description of the condition or capability of the system - Requirement
Document that defines the overall testing objectives - Test Plan
Identifies all steps required to exercise the test cases - Test Procedure
Refines Test Approach and identifies the features to be covered by the design and its associated tests - Test Design
Document containing the actual values for input, along with the anticipated outputs - Test Cases
Statement of what the tester is expected to accomplish during the testing activity - Test Objective
26) We will discuss in the telecon.
27) We will discuss in the telecon.
28) We will discuss in the telecon.
29) All the lines of code belonging to a unit/program have to be executed at least once.
All the unit test cases have to be executed.
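The first criterion (every line of the unit executed at least once) can be checked mechanically. A toy sketch using `sys.settrace`; real projects would use a coverage tool such as those named in answer 11:

```python
import sys

def executed_lines(func, *args):
    """Record which line numbers of `func` run during one call -
    a toy statement-coverage probe."""
    hits, code = set(), func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            hits.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Either call alone leaves one line of the unit unexecuted;
# only the union of both test cases covers all three lines.
neg, pos = executed_lines(classify, -1), executed_lines(classify, 1)
print(len(neg), len(pos), len(neg | pos))
```

The gap between either single run and the union is exactly what a unit-test completeness check looks for.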
30) Test Coverage.
31) All the test cases belonging to that particular integration have to be executed and tracked to closure.
32) Verify coding standards, improve performance, and verify how easy the code is to maintain.
33) Repeat – Q3.
34) Retesting modules connected to the changed program – Regression Testing
Determines how well the user can interact with the system – Usability Testing
Demonstrates key functional capabilities by testing a string of units. - Thread Testing
Path or Logic in a unit or program is known – White box testing
Testing the functions of the program against the Specifications - Black Box testing
Test all the integrated components of an information system - System testing.
35)
36) True
37) False
38) Failure cost
39) Checklist
40) Vocabulary
