
Testing interview questions

1. What is a test plan? What are the components of a test plan?


A software project test plan is a document that describes the objectives, scope, and
approach of a software testing effort. The completed document helps people
outside the test group understand the 'why' and 'how' of product validation.

Answer: objectives, scope, entrance criteria, exit criteria, features to be tested, features
not to be tested, approach, item pass/fail criteria, suspension criteria and resumption
requirements, test deliverables, environmental needs, staffing and training needs,
schedule, risk analysis, and approval.

A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning.

2. How do you scope, organize, and execute a test project?


3. What is the role of QA in a development project?
4. What is the role of QA in a company that produces software?
5. Describe to me what you see as a process. Not a particular process, just the basics of having a
process.
6. Describe to me the Software Development Life Cycle as you would define it.
7. What are the properties of a good requirement?
8. How do you differentiate the roles of Quality Assurance Manager and Project
Manager?
9. Tell me about any quality efforts you have overseen or implemented. Describe some of the
challenges you faced and how you overcame them.
10. How do you deal with environments that are hostile to quality change efforts?
11. If you come onboard, give me a general idea of what your first overall tasks will be as
far as starting a quality effort.
12. What kinds of software testing have you done?
13. Have you ever created a test plan?
14. Have you ever written test cases or did you just execute those written by others?
15. What did you base your test cases on?
16. How do you determine what to test?
17. How do you decide when you have 'tested enough?'
18. How do you test if you have minimal or no documentation about the product?
19. At what stage of the life cycle does testing begin in your opinion?
20. Realising you won't be able to test everything - how do you decide what to test first?
21. Where do you get your expected results?
22. In the past, I have been asked to verbally start mapping out a test plan for a common
situation, such as an ATM. The interviewer might say, "Just thinking out loud, if you were
tasked to test an ATM, what items might your test plan include?" These types of questions
are not meant to be answered conclusively, but they are a good way for the interviewer to
see how you approach the task.
23. If you're given a program that will average student grades, what kinds of inputs would
you use?
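For a grade-averaging question like this, a useful answer covers typical values, boundary values, and invalid inputs. The sketch below is illustrative only: `average_grades` is an invented function with an assumed 0-100 grade range, not part of any referenced product.

```python
# Hypothetical grade-averaging function, assuming grades must be 0-100.
def average_grades(grades):
    if not grades:
        raise ValueError("no grades supplied")
    for g in grades:
        if not (0 <= g <= 100):
            raise ValueError(f"grade out of range: {g}")
    return sum(grades) / len(grades)

# Input categories a tester might try:
assert average_grades([70, 80, 90]) == 80    # typical valid case
assert average_grades([0]) == 0              # lower boundary
assert average_grades([100]) == 100          # upper boundary
for bad in ([], [-1], [101]):                # empty input, out-of-range values
    try:
        average_grades(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Non-numeric input, very large lists, and floating-point grades would be further categories worth probing.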
24. What is the exact difference between Integration & System testing, give me examples
with your project.
System testing is high-level testing; integration testing is lower-level testing. Integration testing is
completed first, not system testing. In other words, only upon completion of integration testing does system
testing start, and not vice versa. For integration testing, test cases are developed with the express
purpose of exercising the interfaces between the components. For system testing, on the other hand, the
complete system is configured in a controlled environment, and test cases are developed to simulate real-life
scenarios in a simulated real-life test environment. The purpose of integration testing is to
ensure distinct components of the application still work in accordance with customer requirements. The
purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in
performing the functions as designed, and to test all functions of the system that are required in real life.
25. How did you go about software testing a project?
26. How do you go about testing a web application?
27. Who in the company is responsible for Quality?
28. Who defines quality?
29. What is an equivalence class?
30. Is a "A fast database retrieval rate" a testable requirement?
31. Should we test every possible combination/scenario for a program?
32. Describe the role that QA plays in the software lifecycle.
33. What should Development require of QA?
34. What should QA require of Development?
35. Give me an example of the best and worst experiences you've had with QA.
36. How does unit testing play a role in the development / software lifecycle?
37. Explain some techniques for developing software components with respect to
testability.
38. Describe a past experience with implementing a test harness in the development of
software.
39. Have you ever worked with QA in developing test tools? Explain the participation
Development should have with QA in leveraging such test tools for QA use.
41. Give me some examples of how you have participated in Integration Testing.
42. Describe your personal software development process.
43. How do you know when your code has met specifications?
44. How do you know your code has met specifications when there are no specifications?
45. Describe your experiences with code analyzers.
46. How do you feel about cyclomatic complexity?
47. Who should test your code?
48. How do you survive chaos?
49. What processes/methodologies are you familiar with?
50. How can you use technology to solve a problem?
51. How would you ensure 100% coverage during software testing?
52. How would you build a test team?
53. What are basic, core practices for a QA specialist?
54. What do you like about QA?
55. What has not worked well in your previous QA experience and what would you
change?
56. How will you begin to improve the QA process?
57. What is UML, and how is it used for software testing?
58. What is the responsibility of programmers vs QA?
59. Why should you care about objects and object-oriented testing?
60. What does 100% statement coverage mean?
61. At what stage of the development cycle are software errors least costly to correct?
62. How to monitor test progress?
63. What type of testing is based specifically on the program code?
64. What type of testing is based on a document that describes the "structure of the
software"?
65. Please describe test design techniques like: state-transition diagrams, decision tables,
activity diagrams.
66. Describe business process testing and what test design technique would you use for
it?
67. What is the value of a testing group? How do you justify your work and budget?
68. What is the role of the test group vis-à-vis documentation, tech support, and so forth?
69. How much interaction with users should testers have, and why?
70. How should you learn about problems discovered in the field, and what should you
learn from those problems?
71. What are the roles of glass-box and black-box testing tools?
72. What development model should programmers and the test group use?
73. How do you get programmers to build testability support into their code?
74. What are the key challenges of software testing?
75. Have you ever completely tested any part of a product? How?
76. Have you done exploratory or specification-driven testing?
77. Should every business test its software the same way?
78. Describe components of a typical test plan, such as tools for interactive products and
for database products, as well as cause-and-effect graphs and data-flow diagrams.
79. When have you had to focus on data integrity?
80. How do you prioritize testing tasks within a project?
81. How do you develop a test plan and schedule? Describe bottom-up and top-down
approaches.
82. Why did you ever become involved in QA/software testing?
83. What is the software testing lifecycle and explain each of its phases?
84. What was a problem you had in your previous assignment (testing if possible)? How did
you resolve it?
85. What are two of your strengths that you will bring to our QA/testing team?
86. What do you like most about Quality Assurance/Software Testing?
87. What do you like least about Quality Assurance/Testing?
88. What is the Waterfall Development Method and do you agree with all the steps?
89. What is the V-Model Development Method and do you agree with this model?
90. What is the Capability Maturity Model (CMM)? At what CMM level were the last few
companies you worked for?
91. Could you tell me two things you did in your previous assignment (QA/Testing related
hopefully) that you are proud of?
92. What methodologies have you used to develop test cases?
93. In an application currently in production, one module of code is being modified. Is it
necessary to re-test the whole application or is it enough to just test functionality
associated with that module?
94. Define each of the following and explain how each relates to the other: Unit, System, and
Integration testing.
95. Explain the differences between White-box, Gray-box, and Black-box testing.
96. Define the following and explain their usefulness: Change Management, Configuration
Management, Version Control, and Defect Tracking.
97. What is ISO 9000? Have you ever been in an ISO shop?
98. What is the difference between a test strategy and a test plan?
A: The test strategy document is a formal description of how a software product will be tested. A test
strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes
the test strategy, and reviews the plan with the project team. The test plan may include test cases,
conditions, the test environment, a list of related tasks, pass/fail criteria, and risk assessment.
Additional sections in the test strategy document include:
- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.
99. What is ISO 9003? Why is it important?
100. What are ISO standards? Why are they important?
101. What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
102. What is IEEE? Why is it important?
103. What is your experience with change control? Our development team has only 10
members. Do you think managing change is such a big deal for us?
104. Can you build a good audit trail using Compuware's QA Center products? Explain why.
105. What is the Difference between Project and Product testing?

106. What are the differences between interface and integration testing? Are system specification
and functional specification the same? What are the differences between system and functional
testing?
107. What is Multi Unit testing?
108. What are the different types, methodologies, approaches, and methods in software testing?
109. What is the difference between test techniques and test methodology?
One test methodology is a three-step process. Creating a test strategy, Creating a test
plan/design, and Executing tests. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.
What is exploratory testing?
Answer: Exploratory testing is simultaneous learning, test design, and test execution. It is usually
done by experienced testers, and is used when requirements for the product are vague or there are
time constraints.
110. What types of testing does non-functional testing include?
Answer: Non-functional testing is used to test quality factors of the build other than
usability and functionality. It includes seven types of testing:
1) Compatibility testing
2) Configuration testing
3) Load testing
4) Stress testing
5) Storage testing
6) Data-volume testing
7) Installation testing

111. How to do the estimation in testing?


Answer: We have a standard template where we divide our testing time into X number of
parts, then determine the time for each part based on the use cases.
112. How to estimate a new application?
Answer: If you haven't used these scenarios before, and no historical data or experience
for those scenarios is available, the best way is to take some samples and run them to measure
the time.
113. How to estimate automation time?
Answer:
1. Time taken for tool evaluation and feasibility
2. Time taken for development of a utility function * number of functions
3. Total number of test cases/steps divided by the number of test cases/steps that you can
automate in a single day.
114. How to estimate a scenario?
Answer:
1. Look at how long these scenarios took last time.
2. Look at how long the testers took last time for something similar.
3. Use informed judgment: for example, if validation checking of a field in DM5
takes half an hour, then a profile with 15 fields would take an average tester about
7.5 hours.
4. Both manual and automation testing should be taken into account.
115. Basic structure of the Test Case Point analysis?
Answer: TCP Analysis uses a 7-step process consisting of the following stages:
1. Identify Use Cases
2. Identify Test Cases
3. Determine TCP for Test Case Generation
This includes designing well-defined test cases. The deliverables here are only the test
cases.
5. Determine TCP for Manual Execution
This execution model only involves executing the test cases already designed and
reporting the defects.
7. Determine Total TCP

1. What is the difference between use case, test case, test plan?
Use Case: Prepared by the business analyst in the Functional Requirement
Specification (FRS); use cases are the steps given by the customer.

Test cases: Prepared by the test engineer, based on the use cases from the FRS, to check the
functionality of an application thoroughly.

Test Plan: The team lead prepares the test plan; it describes the scope of the test, what to
test and what not to test, scheduling, what to test using automation, etc.

2. How can we design the test cases from requirements? Do the requirements
represent the exact functionality of the AUT?
Yes, the requirements should represent the exact functionality of the AUT.
First of all, you have to analyze the requirements very thoroughly in terms of
functionality. Then think about suitable test case design techniques [black-box
design techniques like Equivalence Class Partitioning (ECP), Boundary Value Analysis
(BVA), error guessing, and Cause-Effect Graphing] for writing the test cases.
Using these concepts, you should design test cases that have the capability of
revealing defects.
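As a sketch of how Equivalence Class Partitioning and Boundary Value Analysis turn a requirement into concrete test values (the 1-100 integer range below is a made-up example, not from any specific requirement):

```python
# Assumed requirement, for illustration: the field accepts integers 1..100.
LOW, HIGH = 1, 100

def is_valid(value):
    return LOW <= value <= HIGH

# ECP: pick one representative per equivalence class.
ecp_values = {
    "valid_partition": 50,         # inside the valid range
    "below_range":     LOW - 10,   # invalid: too small
    "above_range":     HIGH + 10,  # invalid: too large
}

# BVA: test on and around each boundary.
bva_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for name, v in ecp_values.items():
    print(f"{name}: {v} -> {is_valid(v)}")
for v in bva_values:
    print(f"boundary {v} -> {is_valid(v)}")
```

Three ECP values plus six BVA values cover the field with nine tests instead of one test per possible input.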
25. What is Capture/Replay Tool?

A test tool that records test input as it is sent to the software under test. The input cases
stored can then be used to reproduce the test at a later time. Most commonly applied to
GUI test tools.

28. What is Code Complete?

Phase of development where functionality is implemented in entirety; bug fixes are all
that are left. All functions found in the Functional Specifications have been implemented.

30. What is Code Inspection?

A formal testing technique where the programmer reviews source code with a group who
ask questions analyzing the program logic, analyzing the code with respect to a checklist
of historically common programming errors, and analyzing its compliance with coding
standards.

32. What is Coding?

The generation of source code.

34. What is Component?

A minimal software item for which a separate specification is available.

41. What is Data Dictionary?

A database that contains definitions of all data items defined during analysis.

42. What is Data Flow Diagram?

A modeling notation that represents a functional decomposition of a system.

49. What is Emulator?

A device, computer program, or system that accepts the same inputs and produces the
same outputs as a given system.

74. What is Quality Audit?


A systematic and independent examination to determine whether quality activities and
related results comply with planned arrangements and whether these arrangements are
implemented effectively and are suitable to achieve objectives.

75. What is Quality Circle?

A group of individuals with related interests that meet at regular intervals to consider
problems or other matters related to the quality of outputs of a process and to the
correction of problems or to the improvement of quality.

77. What is Quality Management?

That aspect of the overall management function that determines and implements the
quality policy.

78. What is Quality Policy?

The overall intentions and direction of an organization as regards quality, as formally
expressed by top management.

79. What is Quality System?

The organizational structure, responsibilities, procedures, processes, and resources for
implementing quality management.

80. What is Race Condition?

A cause of concurrency problems. Multiple accesses to a shared resource, at least one of
which is a write, with no mechanism used by either to moderate simultaneous access.
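A minimal Python sketch of this definition (not from the original text): an unsynchronized read-modify-write on a shared counter is the classic race, and a lock moderates the simultaneous access.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1      # read-modify-write with no moderation: updates can be lost

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # the lock serializes access to the shared resource
            counter += 1

# With the lock, the final count is deterministic regardless of thread scheduling.
threads = [threading.Thread(target=safe_increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Running `unsafe_increment` from multiple threads instead may lose updates, which is exactly why races are hard to catch in testing: the failure depends on timing.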

84. What is Release Candidate?

A pre-release version, which contains the desired functionality of the final version, but
which needs to be tested for bugs (which ideally should be removed before the final
version is released).

90. What is Software Requirements Specification?

A deliverable that describes all data, functional, and behavioral requirements, all
constraints, and all validation requirements for software.

92. What is Static Analysis?

Analysis of a program carried out without executing the program.

93. What is Static Analyzer?


A tool that carries out static analysis.

99. What is Testability?

The degree to which a system or component facilitates the establishment of test criteria
and the performance of tests to determine whether those criteria have been met.

101. What is Test Bed?

An execution environment configured for testing. May consist of specific hardware, OS,
network topology, configuration of the product under test, other application or system
software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

102. What is Test Case?

Test Case is a commonly used term for a specific test. This is usually the smallest unit of
testing. A Test Case will consist of information such as the requirement tested, test steps,
verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution
preconditions, and expected outcomes developed for a particular objective, such as to
exercise a particular program path or to verify compliance with a specific requirement.

What is Test Driven Development?

A testing methodology associated with Agile Programming in which every chunk of code
is covered by unit tests, which must all pass all the time, in an effort to eliminate
unit-level and regression bugs during development. Practitioners of TDD write a lot of
tests, i.e. an amount of test code roughly equal in size to the production code.
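A hedged TDD-style sketch (the `slugify` function and its tests are invented for illustration): the tests are written first, and the production code is only what is needed to make them pass.

```python
import unittest

def slugify(title):
    # just enough production code to make the tests below pass
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Run the suite programmatically so the result can be inspected
# (unittest.main() would call sys.exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests pass:", result.wasSuccessful())
```

In TDD the cycle repeats: add a failing test for the next behavior (say, collapsing repeated spaces), then extend `slugify` until it passes again.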

103. What is Test Driver?

A program or test tool used to execute tests. Also known as a Test Harness.

104. What is Test Environment?

The hardware and software environment in which tests will be run, and any other
software with which the software under test interacts when under test including stubs and
test drivers.

105. What is Test First Design?

Test-first design is one of the mandatory practices of Extreme Programming (XP). It
requires that programmers do not write any production code until they have first written a
unit test.

106. What is Test Harness?

A program or test tool used to execute tests. Also known as a Test Driver.
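A toy illustration of the idea (the function under test and its cases are invented): a driver feeds stored input cases to the code under test and reports pass/fail for each.

```python
# Invented function under test, for illustration only.
def double(x):
    return x * 2

# Stored test cases: (input, expected output).
test_cases = [
    (0, 0),
    (3, 6),
    (-2, -4),
]

def run_driver(fn, cases):
    """Execute each case against fn and record whether it passed."""
    return [(inp, fn(inp) == expected) for inp, expected in cases]

results = run_driver(double, test_cases)
for inp, passed in results:
    print(inp, "PASS" if passed else "FAIL")
```

Real harnesses add setup/teardown, logging, and reporting, but the core loop is the same: supply inputs, compare actual to expected outcomes.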
108. What is Test Procedure?

A document providing detailed instructions for the execution of one or more test cases.

110. What is Test Specification?

A document specifying the test approach for a software feature or combination of
features, and the inputs, predicted results, and execution conditions for the associated tests.

111. What is Test Suite?

A collection of tests used to validate the behavior of a product. The scope of a Test Suite
varies from organization to organization. There may be several Test Suites for a
particular product for example. In most cases however a Test Suite is a high level
concept, grouping together hundreds or thousands of tests related by what they are
intended to test.

112. What are Test Tools?

Computer programs used in the testing of a system, a component of the system, or its
documentation.

115. What is Total Quality Management?

A company commitment to develop a process that achieves a high-quality product and
customer satisfaction.

123. What is Workflow Testing?

Scripted end-to-end testing which duplicates specific workflows which are expected to be
utilized by the end-user.

126. What is the best tester to developer ratio?

Reported tester:developer ratios range from 10:1 to 1:10. There's no simple answer; it
depends on many things: the amount of reused code, the number and type of interfaces,
the platform, quality goals, etc. It can also depend on the development model: the more
detailed the specs, the fewer testers needed. Roles can play a big part as well. Does QA own
beta? Do you include process auditors or planning activities? These figures can all vary very
widely depending on how you define 'tester' and 'developer'. In some organizations, a 'tester' is
anyone who happens to be testing software at the time -- such as their own. In other
organizations, a 'tester' is only a member of an independent test group. It is better to ask
about the test labor content than about the tester/developer ratio. The test labor
content across most applications is generally accepted as 50%, when people do honest
accounting. For life-critical software, this can go up to 80%.
127. How can new Software QA processes be introduced in an existing organization?

- A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.
- Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes should be
balanced with productivity so as to keep bureaucracy from getting out of hand.
- For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
- In all cases the most value for effort will be in requirements management processes,
with a goal of clear, complete, testable requirement specifications or expectations.

136. What's the big deal about 'requirements'?

One of the most reliable methods of ensuring problems, or failure, in a complex software
project is to have poorly documented requirements specifications. Requirements are the
details describing an application's externally-perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and
testable. A non-testable requirement would be, for example, 'user-friendly' (too
subjective). A testable requirement would be something like 'the user must enter their
previously-assigned password to access the application'. Determining and organizing
requirements details in a useful and efficient way can be a difficult effort; different
methods are available depending on the particular project. Many books are available that
describe various approaches to this task. Care should be taken to involve ALL of a
project's significant 'customers' in the requirements process. 'Customers' could be in-
house personnel or out, and could include end-users, customer acceptance testers,
customer contract officers, customer management, future software maintenance
engineers, salespeople, etc. Anyone who could later derail the project if their expectations
aren't met should be included if possible. Organizations vary considerably in their
handling of requirements specifications. Ideally, the requirements are spelled out in a
document with statements such as 'The product shall.....'. 'Design' specifications should
not be confused with 'requirements'; design specifications should be traceable back to the
requirements. In some organizations requirements may end up in high level project plans,
functional specification documents, in design documents, or in other documents at
various levels of detail. No matter what they are called, some type of documentation with
detailed requirements will be needed by testers in order to properly plan and execute
tests. Without such documentation, there will be no clear-cut way to determine if a
software application is performing correctly.

137. What steps are needed to develop and run software tests?
The following are some of the steps to consider:
- Obtain requirements, functional design, and internal design specifications and other
necessary documents
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor requirements
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up
logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the life
cycle

142. What can be done if requirements are changing continuously?

A common problem and a major headache.


- Work with the project's stakeholders early on to understand how requirements might
change so that alternate test plans and strategies can be worked out in advance, if
possible.
- It's helpful if the application's initial design allows for some adaptability so that later
changes do not require redoing the application from scratch.
- If the code is well-commented and well-documented this makes changes easier for the
developers.
- Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the
possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the
original requirements for the 'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while
moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent
risks, and costs of significant requirements changes. Then let management or the
customers (not the developers or testers) decide if the changes are warranted - after all,
that's their job.
- Balance the effort put into setting up automated testing with the expected effort
required to re-do them to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain
unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing
needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to
minimize the detail in the test cases, or set up only higher-level generic-type test plans)
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).

143. What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive
testing is still not justified, risk analysis is again needed and the same considerations as
described previously in 'What if there isn't enough time for thorough testing?' apply. The
tester might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.

144. What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or
hidden functionality, and it would indicate deeper problems in the software development
process. If the functionality isn't necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into
account by the designer or the customer. If not removed, design information will be
needed to determine added testing needs or regression testing needs. Management should
be made aware of any significant added risks as a result of the unexpected functionality.
If the functionality only affects areas such as minor improvements in the user interface,
for example, it may not be a significant risk.

145. How can Software QA processes be implemented without stifling productivity?

By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures,
productivity will be improved instead of stifled. Problem prevention will lessen the need
for problem detection, panics and burn-out will decrease, and there will be improved
focus and less wasted effort. At the same time, attempts should be made to keep
processes simple and efficient, minimize paperwork, promote computer-based processes
and automated tracking and reporting, minimize time required in meetings, and promote
training as part of the QA process. However, no one - especially talented technical types -
likes rules or bureaucracy, and in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development will be needed, but less
time will be required for late-night bug-fixing and calming of irate customers.

146. What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas.
There is no easy solution in this situation, other than:
- Hire good people
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the
customer
- Everyone in the organization should be clear on what 'quality' means to the customer

147. How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among
clients, data communications, hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the focus should be on integration and
system testing. Additionally, load/stress/performance testing may be useful in
determining client/server application limitations and capabilities. There are commercial
tools to assist with such testing.
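As an illustrative sketch only (not from the original answer - the request function and the client/request counts are hypothetical), a minimal load-test driver can spawn concurrent clients and collect response times:

```python
import threading
import time

def run_load_test(request_fn, num_clients=10, requests_per_client=5):
    """Spawn concurrent 'clients' against request_fn and collect per-request latencies."""
    latencies = []
    lock = threading.Lock()

    def client():
        for _ in range(requests_per_client):
            start = time.perf_counter()
            request_fn()  # in a real test this would hit the server under test
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=client) for _ in range(num_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

# Example: a sleep stands in for a real client/server request
latencies = run_load_test(lambda: time.sleep(0.001), num_clients=5, requests_per_client=4)
print(len(latencies))  # 20 recorded response times
```

Commercial tools do far more (ramp-up profiles, virtual-user scripting, result correlation), but the idea is the same: drive the server with controlled concurrency and measure.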

148. How can World Wide Web sites be tested?

Web sites are essentially client/server applications - with web servers and 'browser'
clients. Consideration should be given to the interactions between html pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages (such
as applets, javascript, plug-in applications), and applications that run on the server side
(such as cgi scripts, database interfaces, logging applications, dynamic page generators,
asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions
of each, small but sometimes significant differences between them, variations in
connection speeds, rapidly changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include:
- What are the expected loads on the server (e.g., number of hits per unit time), and what
kind of performance is required under such loads (such as web server response time,
database query response times)? What kinds of tools will be needed for performance
testing (such as web load testing tools, other tools already in house that can be adapted,
web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will be required and
what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that
affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what
are the requirements for maintaining, tracking, and controlling page content, graphics,
links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
- How extensive or customized are the server logging and reporting requirements; are
they considered an integral part of the system and do they require testing?
- How are cgi programs, applets, JavaScript, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If
larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's
clear to the user that they're still within a site.
- Pages should be as browser-independent as possible, or pages should be provided or
generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be
included on each page.
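As a sketch of the link-validation item above, a checker can start by extracting every href from a page; this hypothetical example uses Python's standard html.parser (a real checker would then request each extracted URL and flag dead links):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags so each one can later be validated."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny stand-in page; a real checker would fetch live pages from the site
page = '<html><body><a href="/home">Home</a> <a href="http://example.com/x">X</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/home', 'http://example.com/x']
```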

149. How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application's objects. If the application was well-
designed this can simplify test design.

152. What kinds of testing should be considered?


unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing test
driver modules or test harnesses.
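A minimal test driver of the kind described above might look like the following sketch; the discount function here is a hypothetical module under test, not something from the original text:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """A small test driver exercising normal, boundary, and error cases."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_boundary(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True when all three cases pass
```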
incremental integration testing - continuous testing of an application as new functionality
is added; requires that various aspects of an application's functionality be independent
enough to work separately before all parts of the program are completed, or that test
drivers be developed as needed; done by programmers or by testers.
integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications,
client and server applications on a network, etc. This type of testing is especially relevant
to client/server and distributed systems.
functional testing - black-box type testing geared to the functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing).
system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially
near the end of the development cycle. Automated testing tools can be especially useful
for this type of testing.
install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
failover testing - typically used interchangeably with 'recovery testing'
security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
151. Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility.


This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the
land and employed as a physician to a great lord. The physician was asked which of his
family was the most skillful healer. He replied, "I tend to the sick and dying with drastic
and dramatic treatments, and on occasion someone is cured and my name gets out among
the lords." "My elder brother cures sickness when it just begins to take root, and his skills
are known among the local peasants and neighbors." "My eldest brother is able to sense
the spirit of sickness and eradicate it before it takes form. His name is unknown outside
our home."

You are the test manager starting on system testing. The development team says that due
to a change in the requirements, they will be able to deliver the system for SQA 5 days
past the deadline. You cannot change the resources (work hours, days, or test tools).
What steps will you take to be able to finish the testing in time?

Your company is about to roll out an e-commerce application. It’s not possible to test the
application on all types of browsers on all platforms and operating systems. What steps
would you take in the testing environment to reduce the business risks and commercial
risks?

In your organization, testers are delivering code for system testing without performing
unit testing. Give an example of test policy:
Policy statement
Methodology
Measurement

Testers in your organization are performing tests on the deliverables even after significant
defects have been found. This has resulted in unnecessary testing of little value, because
re-testing needs to be done after defects have been rectified. You are going to update the
test plan with recommendations on when to halt testing. What recommendations are
you going to make?

How do you measure:
Test Effectiveness
Test Efficiency

You found out the senior testers are making more mistakes than junior testers; you need
to communicate this aspect to the senior tester. Also, you don’t want to lose this tester.
How should one go about constructive criticism?

You are assigned to be the test lead for a new program that will automate take-offs and
landings at an airport. How would you write a test strategy for this new program?
When should you begin test planning?
When should you begin testing?
How do you scope out the size of the testing effort?
How many hours a week should a tester work?
How should your staff be managed? How about your overtime?
How do you estimate staff requirements?
What do you do (with the project tasks) when the schedule fails?
How do you handle conflict with programmers?
How do you know when the product is tested well enough?
What characteristics would you seek in a candidate for test-group manager?
What do you think the role of test-group manager should be? Relative to senior
management? Relative to other technical groups in the company? Relative to your staff?
How do your characteristics compare to the profile of the ideal manager that you just
described?
How does your preferred work style work with the ideal test-manager role that you just
described? What is different between the way you work and the role you described?
Who should you hire in a testing group and why?
Can testability features be added to the product code?
Do testers and developers work cooperatively and with mutual respect?
What are the benefits of creating multiple actions within any virtual user script?
Who should be involved in each level of testing? What should be their responsibilities?
You have more verifiable QA experience testing:
It ensures that every piece of code written is tested in some way
Tests give confidence that every part of the code is working
Your experience with Programming within the context of Quality Assurance is:
N/A - I have no programming experience in C, C++ or Java.
I have done some programming in my role as a QA Engineer, and am comfortable
meeting such requirements in Java, C and C++ or VC++.
I have developed applications of moderate complexity that have taken up to three
months to complete.
Your skill in maintaining and debugging an application is best described as:
N/A - I have not participated in debugging a product.
I have worked under the mentorship of a team lead to learn various debugging
techniques and strategies.
I have both an interest in getting to the root of a problem and understand the steps
I need to take to document it fully for the developer.
I am experienced in working with great autonomy on debugging/maintenance efforts
and have a track record of successful projects I can discuss.
Why does testing not prove a program is 100 percent correct (except for extremely simple
programs)?
Because we can only test a finite number of cases, but the program may have an infinite
number of possible combinations of inputs and outputs
Because the people who test the program are not the people who write the code
Because the program is too long
All of the above
We CAN prove a program is 100 percent correct by testing
Which statement regarding Validation is correct:
It refers to the set of activities that ensures the software has been built according to the
customer's requirements.
It refers to the set of activities that ensure the software correctly implements specific
functions.
Are regression tests required or do you feel there is a better use for resources?
Our software designers use UML for modeling applications. Based on their use cases, we
would like to plan a test strategy. Do you agree with this approach or would this mean
more effort for the testers?
Tell me about a difficult time you had at work and how you worked through it.
Give me an example of something you tried at work but did not work out so you had to
go at things another way.
How can one file-compare future-dated output files from a program which has changed,
against the baseline run which used the current date for input? The client does not want
to mask dates on the output files to allow compares. - Answer: Rerun the baseline and
future-date the input files the same number of days as the future-dated run of the changed
program. Now run a file compare against the baseline's future-dated output and the
changed program's future-dated output.
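The rerun-and-compare step can be sketched as follows; the file names and contents here are hypothetical stand-ins for the two future-dated output files:

```python
import difflib

def compare_runs(baseline_path, changed_path):
    """Return the unified diff between two output files; an empty list means they match."""
    with open(baseline_path) as f:
        baseline = f.readlines()
    with open(changed_path) as f:
        changed = f.readlines()
    return list(difflib.unified_diff(baseline, changed,
                                     fromfile=baseline_path, tofile=changed_path))

# Example with two small files standing in for the future-dated outputs
with open("baseline_future.out", "w") as f:
    f.write("2031-01-15 balance=100\n")
with open("changed_future.out", "w") as f:
    f.write("2031-01-15 balance=100\n")

diff = compare_runs("baseline_future.out", "changed_future.out")
print("MATCH" if not diff else "DIFFERENCES FOUND")  # prints "MATCH"
```

Because both runs were future-dated by the same number of days, date fields line up and no masking is needed.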

What methodologies have you used to develop test cases?


In an application currently in production, one module of code is being modified. Is it
necessary to re-test the whole application or is it enough to just test functionality
associated with that module?
Define each of the following and explain how each relates to the other: Unit, System, and
Integration testing.
Explain the differences between White-box, Gray-box, and Black-box testing.
How do you go about going into a new organization? How do you assimilate?
Define the following and explain their usefulness: Change Management, Configuration
Management, Version Control, and Defect Tracking.
When are you done testing?
Can you build a good audit trail using Compuware's QACenter products? Explain why.
How important is Change Management in today's computing environments?
Do you think tools are required for managing change? Explain, and please list some
tools/practices which can help you manage change.
We believe in ad-hoc software processes for projects. Do you agree with this? Please
explain your answer.
When is a good time for system testing?
How do you determine what to test?
How do you decide when you have 'tested enough?'
What criteria would you use to select Web transactions for load testing?
What are the reasons why parameterization is necessary when load testing the Web server
and the database server?
How can data caching have a negative effect on load testing results?
What usually indicates that your virtual user script has dynamic data that is dependent on
you parameterized fields?
What are the various status reports that you need generate for Developers and Senior
Management?
Write a sample Test Policy?
Explain the various types of testing after arranging them in a chronological order?
Explain what test tools you will need for client-server testing and why?
Explain what test tools you will need for Web app testing and why?
Explain the pros and cons of testing done by the development team versus testing by an
independent team?
When should testing start in a project? Why?
How do you go about testing a web application?
Who in the company is responsible for Quality?
The top management was feeling that when there are any changes in the technology being
used, development schedules etc, it was a waste of time to update the Test Plan. Instead,
they were emphasizing that you should put your time into testing than working on the test
plan. Your Project Manager asked for your opinion. You have argued that Test Plan is
very important and you need to update your test plan from time to time. It’s not a waste
of time and testing activities would be more effective when you have your plan clear. Use
some metrics. How would you support your argument to have the test plan consistently
updated all the time?
The QAI is starting a project to put the CSTE certification online. They will use an
automated process for recording candidate information, scheduling candidates for exams,
keeping track of results and sending out certificates. Write a brief test plan for this new
project.

The project had a very high cost of testing. After going into detail, someone found
out that the testers are spending their time on software that doesn’t have too many
defects. How will you make sure that this is correct?
What happens to the test plan if the application has a functionality not mentioned in the
requirements?
You are given two scenarios to test. Scenario 1 has only one terminal for entry and
processing whereas scenario 2 has several terminals where the data input can be made.
Assuming that the processing work is the same, what would be the specific tests that you
would perform in Scenario 2, which you would not carry out on Scenario 1?
What is the need for Test Planning?
What are the various status reports you will generate to Developers and Senior
Management?
Define and explain any three aspects of code review?
Explain 5 risks in an e-commerce project. Identify the personnel that must be involved in
the risk analysis of a project and describe their duties. How will you prioritize the risks?
What is an equivalence class?
Is "a fast database retrieval rate" a testable requirement?
Should we test every possible combination/scenario for a program?
In case anybody cares, here are the questions that I will be asking:
Describe the role that QA plays in the software lifecycle.
What should Development require of QA?
What should QA require of Development?
Give me an example of the best and worst experiences you've had with QA.
How does unit testing play a role in the development / software lifecycle?
Explain some techniques for developing software components with respect to testability.
What is the relationship between test scripts and test cases?
What goes into a test package?
What test data would you need to test that a specific date occurs on a specific day of
week?
It is the eleventh hour and we have no test scripts, cases, or data. What would you do
first?

What would you do if management pressure is stating that testing is complete and you
feel differently?
Why did you ever become involved in QA/testing?
What is the testing lifecycle and explain each of its phases?

155. What are the tables in test plans and test cases?

A test plan is a document that contains the scope, approach, test design and test
strategies. It includes the following:
1. Test case identifier
2. Scope
3. Features to be tested
4. Features not to be tested
5. Test strategy
6. Test approach
7. Test deliverables
8. Responsibilities
9. Staffing and training
10. Risk and contingencies
11. Approval

A test case, on the other hand, is a documented set of steps/activities that are carried out
or executed on the software in order to confirm its functionality/behavior for a certain
set of inputs.

156. What are the table contents in test plans and test cases?

A test plan is a document which is prepared with the details of the testing priorities. A
test plan generally includes:
1. Objective of testing
2. Scope of testing
3. Reason for testing
4. Timeframe
5. Environment
6. Entrance and exit criteria
7. Risk factors involved
8. Deliverables

180. How will you test a field that generates auto numbers in the AUT when we click
the button 'New' in the application?

One solution is to create a text file in a certain location, update the auto-generated value
each time we run the test, and compare the currently generated value with the previous
one.
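That approach can be sketched as follows; the file name and the assumption that auto numbers strictly increase are illustrative, not from the original answer:

```python
import os

STATE_FILE = "last_auto_number.txt"  # hypothetical location for the stored value

def check_auto_number(new_value, state_file=STATE_FILE):
    """Compare a freshly generated auto number with the stored previous value,
    then persist the new value for the next run."""
    previous = None
    if os.path.exists(state_file):
        with open(state_file) as f:
            previous = int(f.read().strip())
    with open(state_file, "w") as f:
        f.write(str(new_value))
    return previous is None or new_value > previous

# Fresh start so the example below is deterministic
if os.path.exists(STATE_FILE):
    os.remove(STATE_FILE)

print(check_auto_number(101))  # True: first recorded value
print(check_auto_number(102))  # True: the number increased as expected
print(check_auto_number(102))  # False: the field did not generate a new number
```

In an automated tool, the same logic would be driven from the script each time the 'New' button is clicked.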

181. How will you evaluate the fields in the application under test using an automation
tool?

We can use verification points (e.g., in Rational Robot) to validate the fields. For
example, using the Object Data and Object Data Properties verification points we can
validate fields.

182. Can we perform the test of a single application at the same time using different
tools on the same machine?

No. The testing tools will be ambiguous about which browser was opened by which
tool.

185. How to test web applications?

The basic difference in web testing is that here we have to test for URL coverage and
link coverage. Using WinRunner we can conduct web testing, but we have to make sure
that the WebTest option is selected in the 'Add-In Manager'. Using WinRunner we
cannot test XML objects.

186. What are the problems encountered during testing of application compatibility
on different browsers and on different operating systems?

Font issues and alignment issues.

188. How exactly is testing of application compatibility on different browsers and
on different operating systems done?


189. How does testing proceed when an SRS or any other document is not given?

If an SRS is not available we can perform exploratory testing. In exploratory testing the
basic module is executed and, depending on its results, the next plan is executed.

190. How do we test for severe memory leaks?

By using endurance testing. Endurance testing means checking for memory leaks or
other problems that may occur with prolonged execution.
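A rough sketch of such a check in Python uses the standard tracemalloc module to watch allocation growth across repeated iterations; the leaky operation and the growth threshold here are hypothetical:

```python
import tracemalloc

_cache = []  # simulates a leak: grows on every call and is never cleared

def leaky_operation():
    _cache.append(bytearray(10_000))  # hypothetical operation that retains ~10 KB per call

def grows_over_iterations(operation, iterations=100, threshold_bytes=100_000):
    """Run the operation repeatedly and report whether traced allocations keep growing."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        operation()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (after - before) > threshold_bytes

print(grows_over_iterations(leaky_operation))  # True: memory grew across the run
```

A real endurance test would run for hours against the whole application and chart memory over time rather than use a single fixed threshold.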

193. How do you do usability testing, security testing, installation testing, ad-hoc,
safety and smoke testing?


194. What are memory leaks and buffer overflows?

A memory leak means incomplete deallocation - a bug that happens very often. A buffer
overflow means data sent as input to the server overflows the boundaries of the input
area, thus causing the server to misbehave. Buffer overflows can be exploited.

software quality assurance interview questions only (1)

• our software designers use uml for modeling applications. based on their use
cases, we would like to plan a test strategy. do you agree with this approach or
would this mean more effort for the testers.
• how can one file compare future dated output files from a program which has
change, against the baseline run which used current date for input. the client does
not want to mask dates on the output files to allow compares
• what are basic, core, practices for a qa specialist?

software quality assurance interview questions only (2)

• what is the value of a testing group? how do you justify your work and budget?
• what is the role of the test group vis-à-vis documentation, tech support, and so
forth?
• how much interaction with users should testers have, and why?
• how should you learn about problems discovered in the field, and what should
you learn from those problems?
• what are the roles of glass-box and black-box testing tools?
• what development model should programmers and the test group use?
• how do you get programmers to build testability support into their code?
• what are the key challenges of testing?
• have you ever completely tested any part of a product? how?
• have you done exploratory or specification-driven testing?
• should every business test its software the same way?
• describe components of a typical test plan, such as tools for interactive products
and for database products, as well as cause-and-effect graphs and data-flow
diagrams.
• when have you had to focus on data integrity?
• how do you prioritize testing tasks within a project?
• how do you develop a test plan and schedule? describe bottom-up and top-down
approaches.
• when should you begin test planning?
• how do you know when the product is tested well enough?
• what characteristics would you seek in a candidate for test-group manager?
• what do you think the role of test-group manager should be? relative to senior
management? relative to other technical groups in the company? relative to your
staff?
• how do your characteristics compare to the profile of the ideal manager that you
just described?
• how does your preferred work style work with the ideal test-manager role that you
just described? what is different between the way you work and the role you
described?
• who should you hire in a testing group and why?
• how do you estimate staff requirements?
• why did you ever become involved in qa/testing?
• what are two of your strengths that you will bring to our qa/testing team?

software quality assurance interview questions only (4)

• when do you know you have tested enough?


• what did you include in a test plan?
• how do you scope, organize, and execute a test project?
• what is the role of qa in a development project?
• what is the role of qa in a company that produces software?
• describe to me what you see as a process. not a particular process, just the basics
of having a process.
• describe to me when you would consider employing a failure mode and effect
analysis.
• describe to me the software development life cycle as you would define it.
• how do you differentiate the roles of quality assurance manager and project
manager?
• tell me about any quality efforts you have overseen or implemented. describe
some of the challenges you faced and how you overcame them.
• how do you deal with environments that are hostile to quality change efforts?
• in general, how do you see automation fitting into the overall process of testing?
• if you come onboard, give me a general idea of what your first overall tasks will
be as far as starting a quality effort.
• what kinds of testing have you done?
• you are the test manager starting on system testing. the development team says
that due to a change in the requirements, they will be able to deliver the system
for sqa 5 days past the deadline. you cannot change the resources (work hours,
days, or test tools). what steps will you take to be able to finish the testing in
time?
• your company is about to roll out an e-commerce application. it’s not possible to
test the application on all types of browsers on all platforms and operating
systems. what steps would you take in the testing environment to reduce the
business risks and commercial risks?
• in your organization, testers are delivering code for system testing without
performing unit testing. give an example of test policy:
o policy statement
o methodology
o measurement
• testers in your organization are performing tests on the deliverables even after
significant defects have been found. this has resulted in unnecessary testing of
little value, because re-testing needs to be done after defects have been rectified.
you are going to update the test plan with recommendations on when to halt
testing. what recommendations are you going to make?
• how do you measure:
test effectiveness
test efficiency
• you found out the senior testers are making more mistakes than junior testers; you
need to communicate this aspect to the senior tester. also, you don’t want to lose
this tester. how should one go about constructive criticism?
• you are assigned to be the test lead for a new program that will automate take-offs
and landings at an airport. how would you write a test strategy for this new
program?

software quality assurance interview questions only (7)

• who should be involved in each level of testing? what should be their


responsibilities?

• your professional experience debugging, developing test cases and running


system tests for developed subsystems and features is:
a. n/a - i do not have experience in these areas.
b. i have this experience on 1 to 3 commercial product launches or product
integrations.
c. i have this experience on 4 to 6 commercial product launches or product
integrations.
d. i have this experience on 7 to 10 or more commercial product launches or
product integrations.
• you have personally created the following number of test plans or test cases:

a. n/a - i would be new to creating test plans or test cases


b. for 1 to 3 product releases
c. for 4 to 6 product releases
d. for 7 to 10 product releases
• what is an advantage of black box testing over white box testing:

a. tests give confidence that the program meets its specifications


b. tests can be done while the program is being written instead of waiting
until it is finished
c. it ensures that every piece of code written is tested in some way
d. tests give confidence that every part of the code is working

• your experience with programming within the context of quality assurance is:

a. n/a - i have no programming experience in c, c++ or java.


b. i have done some programming in my role as a qa engineer, and am
comfortable meeting such requirements in java, c and c++ or vc++.
c. i have developed applications of moderate complexity that have taken
up to three months to complete.

• why does testing not prove a program is 100 percent correct (except for extremely
simple programs)?

a. because we can only test a finite number of cases, but the program may have an infinite
number of possible combinations of inputs and outputs
b. because the people who test the program are not the people who write the
code
c. because the program is too long
d. all of the above
e. we can prove a program is 100 percent correct by testing
• which statement regarding validation is correct:

a. it refers to the set of activities that ensures the software has been built according to the
customer's requirements.
b. it refers to the set of activities that ensure the software correctly
implements specific functions.
software quality assurance interview questions only (8)

• which of the following testing strategies ignores the internal structure of the
software?

a. interface testing
b. top down testing
c. white box testing
d. black box testing
e. sandwich testing
• regarding your experience with xml:

a. n/a - you would be new to using xml.
b. you have a basic understanding of its use.
c. you have experience using xml to transfer and transform data.
d. you have significant experience creating xml schema and creating
applications for data transfer.

• regarding the use of a computer, you are:

a. an expert with computers, the internet and windows, and am often asked to help others.
b. new to computers and would need a little help to get started.
c. comfortable with e-mail and the internet, but would need help with other
applications required for the position.
d. comfortable with e-mail and a variety of computer software, but not an
expert.
• your knowledge and experience in linux is:

a. n/a - you have no direct linux operating system experience and would need
help to become functionally proficient.
b. you have a good understanding of linux and run this os on your home pc.
c. you have experience with multiple linux variants and feel as comfortable
with it as most people do working in windows.

software quality assurance interview questions only (10)

• how do you determine what to test?


• how do you decide when you have 'tested enough?'
• how do you test if you have minimal or no documentation about the product?
• at what stage of the life cycle does testing begin in your opinion?
• realising you won't be able to test everything - how do you decide what to test
first?

software quality assurance interview questions only (11)

• what tools are available for support of testing during software development life
cycle?

• what processes/methodologies are you familiar with?


• how can you use technology to solve a problem?
• how to find that tools work well with your existing system?
• how would you ensure 100% coverage of testing?
• how would you build a test team?
• what are two primary goals of testing?
• if your company is going to conduct a review meeting, who should be on the
review committee and why?
• write any three attributes which will impact the testing process?
• you are a tester for testing a large system. the system data model is very large
with many attributes and there are a lot of inter-dependencies within the fields.
what steps would you use to test the system and also what are the effects of the
steps you have taken on the test plan?
• explain and provide examples for the following black box techniques?
o boundary value testing
o equivalence testing
o error guessing
• describe a past experience with implementing a test harness in the development of
software.
• have you ever worked with qa in developing test tools? explain the participation
development should have with qa in leveraging such test tools for qa use.
• give me some examples of how you have participated in integration testing.
• describe your personal software development process.
• how do you know when your code has met specifications?
• how do you know your code has met specifications when there are no
specifications?
• describe your experiences with code analyzers.

4. Why is it often hard for management to get serious about quality assurance?
* Solving problems is a high-visibility process; preventing problems is low-visibility.
This is illustrated by an old parable: in ancient China there was a family of healers,
one of whom was known throughout the land and employed as a physician to a great
lord.

What is 'Software Testing'?


Testing involves operation of a system or application under controlled conditions and
evaluating the results (e.g., 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong, to determine whether things happen when they shouldn't or
don't happen when they should. It is oriented to 'detection'.
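As a minimal sketch of 'controlled conditions and expected results', assuming a hypothetical apply_discount function as the system under test (the function and its rules are invented for illustration):

```python
import unittest

# Hypothetical function under test: applies a percentage discount to a total.
def apply_discount(total, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_normal_condition(self):
        # "If the user does C, then D should happen."
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_abnormal_condition(self):
        # Intentionally try to make things go wrong.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Run the suite without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that one test exercises a normal condition and the other an abnormal one, matching the definition above.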

• Organizations vary considerably in how they assign responsibility for QA and


testing. Sometimes they're the combined responsibility of one group or individual.
Also common are project teams that include a mix of testers and developers who
work closely together, with overall QA processes monitored by project managers.
It will depend on what best fits an organization's size and business structure.


How can new Software QA processes be introduced in an existing organization?

• A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious
management buy-in is required and a formalized QA process is necessary.
• Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes should be
balanced with productivity so as to keep bureaucracy from getting out of hand.
• For small groups or projects, a more ad-hoc process may be appropriate,
depending on the type of customers and projects. A lot will depend on team leads
or managers, feedback to developers, and ensuring adequate communications
among customers, managers, developers, and testers.
• The most value for effort will often be in (a) requirements management processes,
with a goal of clear, complete, testable requirement specifications embodied in
requirements or design documentation, or in 'agile'-type environments extensive
continuous coordination with end-users, (b) design inspections and code
inspections, and (c) post-mortems/retrospectives.
• Other possibilities include incremental self-managed team approaches such as
'Kaizen' methods of continuous process improvement, the Deming-Shewhart
Plan-Do-Check-Act cycle, and others.

What kinds of testing should be considered?


• incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's
functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by
programmers or by testers.
• integration testing - testing of combined parts of an application to determine if
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing
is especially relevant to client/server and distributed systems.
• system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
• install/uninstall testing - testing of full, partial, or upgrade install/uninstall
processes.
• failover testing - typically used interchangeably with 'recovery testing'
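The integration testing idea above can be sketched with hypothetical 'parts' (a record parser and a totaling function, invented for illustration) checked working together:

```python
# Hypothetical 'parts' of an application.
def parse_record(line):
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

def total_quantity(records):
    return sum(r["qty"] for r in records)

def integration_test():
    # Integration: the parser's output feeds the totaling function,
    # so the test exercises the combined parts, not one in isolation.
    lines = ["widget, 3", "gadget, 4"]
    records = [parse_record(line) for line in lines]
    assert total_quantity(records) == 7
    return "pass"
```

A unit test, by contrast, would exercise parse_record or total_quantity on its own.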

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by


the U.S. Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity
Model Integration'), developed by the SEI. It's a model of 5 levels of process
'maturity' that determine effectiveness in delivering quality software. It is geared
to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization,
and if reasonably applied can be helpful. Organizations can receive CMMI ratings
by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes are in
place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning,
and configuration management processes are in place; successful practices
can be repeated.

Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in
place to oversee software processes, and training programs are used to
ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new
processes and technologies can be predicted and effectively implemented
when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed.
Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5.
(For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3,
2% at 4, and 0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were U.S. federal
contractors or agencies. For those rated at Level 1, the most problematical key
process area was Software Quality Assurance.

• ISO = 'International Organization for Standardization' - The ISO 9001:2000
standard (which replaces the previous standard of 1994) concerns quality systems
that are assessed by outside auditors, and it applies to many kinds of production
and manufacturing organizations, not just software. It covers documentation,
design, development, production, testing, installation, servicing, and other
processes. The full set of standards consists of: (a) Q9001-2000 - Quality
Management Systems: Requirements; (b) Q9000-2000 - Quality Management
Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management
Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a
third-party auditor assesses an organization, and certification is typically good for
about 3 years, after which a complete reassessment is required. Note that ISO
certification does not necessarily indicate quality products - it indicates only that
documented processes are followed. Also see http://www.iso.org/ for the latest
information. In the U.S. the standards can be purchased via the ASQ web site at
http://www.asq.org/quality-press/
ISO 9126 defines six high-level quality characteristics that can be used in
software evaluation: functionality, reliability, usability, efficiency,
maintainability, and portability.
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things,
creates standards such as 'IEEE Standard for Software Test Documentation'
(IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing'
(IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance
Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards
body in the U.S.; publishes some software-related standards in conjunction with
the IEEE and ASQ (American Society for Quality).
• Other software development/IT management process assessment methods besides
CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF,
and CobiT.
2. What is the value of a testing group? How do you justify your work and budget?
3. What is the role of the test group vis-à-vis documentation, tech support, and so forth?
4. How much interaction with users should testers have, and why?
6. What are the roles of glass-box and black-box testing tools?
7. What issues come up in test automation, and how do you manage them?
8. What development model should programmers and the test group use?
9. How do you get programmers to build testability support into their code?
13. Have you done exploratory or specification-driven testing?
14. Should every business test its software the same way?
16. Describe components of a typical test plan, such as tools for interactive products and
for database products, as well as cause-and-effect graphs and data-flow diagrams.
17. When have you had to focus on data integrity?

20. How do you develop a test plan and schedule? Describe bottom-up and top-down
approaches.

1. What is a "Good Software Tester"?


2. Could you tell me two things you did in your previous assignment (QA/Testing
related hopefully) that you are proud of?

3. What types of testing do testers perform?


A. Two types of testing: 1. White Box Testing 2. Black Box Testing.

4. What is the Outcome of Testing?


A. The outcome of testing will be a stable application which meets the customer's
requirements.

5. What kind of testing have you done?


A. Usability, functionality, system testing, regression testing, UAT
(it depends on the person).

7. What are the entry criteria for Functionality and Performance testing?
A. Entry criteria for functionality testing: a functional specification / BRS
(CRS) / user manual, and an integrated application stable enough for testing.

Entry criteria for performance testing: successful completion of functional testing,
once all the functional requirements are covered, tested, and approved or validated.

9. Why do you go for White box testing, when Black box testing is available?
A. Black box testing aims to certify the commercial (business) and functional
(technical) aspects of an application. Loops, structures, arrays, conditions, files,
etc. are very micro-level, but they are the foundation of any application, so white
box testing examines these things directly.

Even though black box testing is available, we should go for white box testing also,
to check the correctness of the code and for integrating the modules.
11. When to start and stop testing?
A. This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing can
never be done.
Common factors in deciding when to stop are:

Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends
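Several of the factors above can be combined into a simple exit-criteria check. The thresholds below are illustrative assumptions, not industry standards:

```python
def enough_testing(pass_rate, coverage, open_bug_rate,
                   min_pass=0.95, min_coverage=0.80, max_bug_rate=0.02):
    """Illustrative exit-criteria check: stop when the test-case pass rate
    and coverage are high enough and the bug-find rate is low enough.
    All thresholds are assumed values for the sketch."""
    return (pass_rate >= min_pass
            and coverage >= min_coverage
            and open_bug_rate <= max_bug_rate)
```

For example, enough_testing(0.97, 0.85, 0.01) meets all three illustrative criteria, while a 90% pass rate would not.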

13. What is a Baseline document?

A. A reviewed and approved document is called a baseline document, e.g. the test
plan or SRS.

21. What are the various levels of testing?

A. The various levels of testing are:
1. Ad-hoc testing
2. Sanity testing
3. Regression testing
4. Functional testing
5. Web testing
22. What are the types of testing you know and have experienced?
A. I am experienced in Black Box testing.

24. After completing testing, what would you deliver to the client?

A. It depends upon what you have specified in the test plan document.
The contents delivered to the client are the test deliverables:
1. Test plan document 2. Master test case document 3. Test summary report
4. Defect reports.

26. What is a Data Guideline?

27. Why do you go for a Test Bed?

A. We prepare a test bed because we first need to identify under which
environment (hardware, software) the application will run smoothly; only then
can we run the application without any interruptions.

30. What is a test case?

A. A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly.
31. What is a test condition?
A. The condition required to test a feature (a precondition).
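The parts of a test case named above (precondition, input, action, expected response) can be captured as data and executed. The login behavior here is a hypothetical stand-in, not a real application:

```python
# A test case captured as data: precondition (the test condition),
# input, action, and expected response. All values are hypothetical.
test_case = {
    "id": "TC-001",
    "precondition": "user is on the login form",
    "input": {"username": "alice", "password": ""},
    "action": "submit login form",
    "expected": "error: password is required",
}

def run_login(username, password):
    # Hypothetical feature under test.
    if not password:
        return "error: password is required"
    return "welcome " + username

actual = run_login(**test_case["input"])
verdict = "pass" if actual == test_case["expected"] else "fail"
```

Comparing the actual response against the expected one yields the pass/fail verdict.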

33. What is test data?

A. Test data is the input data (valid and invalid) given to check that a feature
of an application is working correctly.

36. What are the different types of testing techniques?

A. 1. White box testing 2. Black box testing.

37. What are the different types of test case techniques?

A. 1. Equivalence partitioning 2. Boundary value analysis 3. Error guessing.
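Boundary value analysis, listed above, can be sketched by probing just below, on, and just above each boundary. The 18-65 age rule is a hypothetical example:

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis points for an inclusive range
    [lo, hi]: just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    # Hypothetical rule under test: valid ages are 18..65 inclusive.
    return 18 <= age <= 65

results = {v: accepts_age(v) for v in boundary_values(18, 65)}
# Expectation: 17 and 66 rejected; 18, 19, 64, 65 accepted.
```

Off-by-one errors cluster at boundaries, which is why these six values are chosen rather than arbitrary ones.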

38. What are the risks involved in testing?

39. Differentiate between a test bed and a test environment.

A. Both are the same.

47. What is the difference between a functional spec and a business requirement
specification?
A.
48. What is the difference between unit testing and integration testing?
A. Unit testing: a testing activity typically done by the developers, not by
testers, as it requires detailed knowledge of the internal program design and code.
It is not always easily done unless the application has a well-designed architecture
with tight code.

Integration testing: testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing is
especially relevant to client/server and distributed systems.
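The contrast can be sketched with a hypothetical tax calculator: the unit test exercises one function in isolation, while the integration test checks the combined parts:

```python
# Hypothetical pieces: a tax calculator (the unit) and an invoice builder
# that combines it with formatting (the integration).
def add_tax(amount, rate=0.10):
    return round(amount * (1 + rate), 2)

def build_invoice(amount):
    return f"TOTAL: {add_tax(amount):.2f}"

# Unit test: one function in isolation, using knowledge of its internal rule.
assert add_tax(100) == 110.0

# Integration test: the combined parts function together correctly.
assert build_invoice(100) == "TOTAL: 110.00"
```

A failure in the second assertion but not the first would point at the combination, not the unit.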

52. What is the difference between two-tier and three-tier architecture?
A. Two-tier architecture: client-server architecture, where the client sends requests
directly to the server and gets responses directly from the server.

Three-tier architecture: typically a web-based application, where middleware sits
between client and server; when the client sends a request it goes to the middleware,
the middleware forwards it to the server, and vice versa.

54. What is the difference between Integration and System testing?

A. Integration testing: testing of combined parts of an application to determine if
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing is
especially relevant to client/server and distributed systems.
System testing: testing conducted on the entire system to check whether it meets
the customer requirements or not.

57. What is the difference between SIT and IST?

58. What is the difference between static and dynamic testing?

A. Static testing: test activities that are performed without running the software;
it includes inspections, walkthroughs, and desk checks.

Dynamic testing: test activities that are performed by running the software.

71 basic SQA / testing interview questions

1. What are the differences between interface and integration testing? Are system
specification and functional specification the same? What are the differences
between system and functional testing?
2. What is Multi Unit testing?
3. What are the different types, methodologies, approaches, methods in software
testing
4. What is the difference between test techniques and test methodology?
5. What is meant by test environment,… what is meant by DB installing and
configuring and deploying skills?
6. What is logsheet? And what are the components in it?
7. What is Red Box testing? What is Yellow Box testing? What is Grey Box testing?
8. What is business process in software testing?
9. What is the difference between Desktop application testing and Web testing?
10. Find the values of each of the alphabets: NOON + SOON + MOON = JUNE
11. With multiple testers how does one know which test cases are assigned to them? •
Folder structure • Test process
12. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and
a Test Case? What is their order of succession in the STLC?
13. How many functional testing tools are available? What is the easiest scripting
language used?
14. Which phase is called the Blackout or Quiet Phase in the SDLC?
15. When an application is given for testing, with what initial testing the testing will
be started and when are all the different types of testing done following the initial
testing?
16.
17. Who are the three stake holders in testing?
18. What is meant by bucket testing?
19. What is test case analysis?
20. The recruiter asked if I have Experience in Pathways. What is this?
21. What are the main things we have to keep in mind while writing the test cases?
Explain with format by giving an example
22. How we can write functional and integration test cases? Explain with format by
giving examples.
23. Explain the water fall model and V- model of software development life cycles
with block diagrams.
24. For notepad application can any one write the functional and system test cases?
25. What is installation shield in testing
26. What is one key element of the test case?
27. What are the management tools we have in testing?
28. Can we write Functional test case based on only BRD or only Use case?
29. What is the main difference between smoke and sanity testing? When are these
performed?
30. What Technical Environments have you worked with?
31. Have you ever converted Test Scenarios into Test Cases?
32. What is the ONE key element of ‘test case’?
33. What is the ONE key element of a Test Plan?
34. What is SQA testing? tell us steps of SQA testing
35. Which Methodology you follow in your test case?
36. What are the test cases prepared by the testing team
37. During the start of the project, how will the company come to a conclusion on
whether a tool is required for testing or not?
38. What is a Test procedure?
39. What is the difference between SYSTEM testing and END-TO-END testing?
40. What is the difference between an exception and an error?
41. How much time is/should be allocated for testing out of total Development time
based on industry standards?
42. Define Quality - bug free, Functionality working or both?
43. What is the major difference between Web services & client server environment?
44. Is there any tool to calculate how much time should be allocated for testing out of
total development?
45. What is Scalability testing? Which tool is used?
46.
47. What is scalability testing? What are the phases of the scalability testing?
48. What kind of things does one need to know before starting an automation project?
49. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and
a Test Case? What is their order of succession in the STLC?
50. How many functional testing tools are available? What is the easiest scripting
language used?
51. The project is completed. 'Completed' means that UAT testing is going on. In
that situation, as a tester, what will you do?

1. I-soft
What should be done after writing a test case?
3. Define the components present in a test strategy
4. Define the components present in a test plan
5. Define database testing
6. What are the different types of test cases that you have written in your project?
9. Have you written a test plan?

2. Testing process followed in your company

3. Testing methodology
4. Where do you maintain the repositories?
5. What is CVS?

8. How will you validate the functionality of the test cases if there is no business
requirement document or user requirement document as such?
9. Testing process followed in your company?
10. Tell me about CMM Level 4. What steps are to be followed to achieve the
CMM Level 4 standards?
11. What is Back End testing?
13. How will you write test cases for a given scenario, i.e. main page, login screen,
transaction, report verification?
15. What is CVS and why is it used?

17. What is a Test Summary Report?

18. What is a Test Closure Report?
20. What will be specified in the test case?
21. What are the testing methodologies that you have followed in your project?
22. What kind of testing have you been involved in? Explain it.
23. What is UAT testing?
24. What are joins, and what are the different types of joins in SQL? Explain them.
25. What is a foreign key in SQL?

2. Explain the project, and draw the architecture of your project.
3. What are the different types of severity?
5. What are the responsibilities of a tester?
6. Give some examples of how you would write the test cases if a scenario involves
a login screen.

1. What are the different types of testing followed?

2. What are the different levels of testing used during testing of the application?
4. What type of testing will be done in installation testing or system testing?
5. What is meant by CMMI? What are the different CMM levels?
6. Explain the components involved in CMM Level 4.

11. How will you ensure that you have covered all the functionality while writing
test cases if there is no functional spec and there has been no knowledge transfer
(KT) about the application?
17. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
21. Exploratory testing: often taken to mean a creative, informal software test that is
not based on formal test plans or test cases; testers may be learning the software as
they test it.
22. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the
testers have significant understanding of the software they are testing.
24. Comparison testing: comparing software weaknesses and strengths to competing
products.
24. Comparison testing: comparing software weakness and strengths to competing
products.
1) What is SCORM?
2) What is Section 508?
3) Have you done any portal testing?
4) Do you have any idea about LMS or LCMS?
5) Have you done any compliance testing?
6) Have you done any compatibility testing?
7) What are the critical issues found while testing the projects in your organization?

8) Tell me about the testing procedures used by you in your organization.

9) How do you test a Flash file?
10) Have you found any difference while testing a Flash file versus an HTML file?
11) What types of testing are you aware of?
12) While doing compatibility testing, have you found any critical issues?
13) While doing compliance testing, have you noticed any critical/abnormal issues?
15) Have you done any performance or stress testing? If yes, did you use any
automation techniques in it or not?

17) Tell me about the testing scenarios used in the project.

18) Have you written any test cases/test plans? If yes, can you tell me one or two
instances?
19) Are you aware of any usability and acceptance testing?
20) Is your testing conventional or non-conventional?
21) Have you tested courses in any other languages? If yes, did you face any critical
situations?
22) What things should be concentrated on more while testing the same projects in
different environments?
23) What are AICC standards?

60. What is impact analysis? How do you do impact analysis in your project?

A: - Impact analysis means that when we are doing regression testing, we check that
the bug fixes are working properly, and that after fixing these bugs the other
components are still working as per their requirements and have not been disturbed.

61. How to test a website by manual testing?

A: - Web Testing
While testing websites the following scenarios should be considered:
Functionality
Performance
Usability
Server side interface
Client side compatibility
Security

Functionality:
In testing the functionality of the web site the following should be tested:
Links
Internal links
External links
Mail links
Broken links
Forms
Field validation
Functional chart
Error messages for wrong input
Optional and mandatory fields
Database
Testing will be done on database integrity.
Cookies
Testing will be done on the client system side, on the temporary internet files.
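The link checks above can be partly automated. A sketch using only Python's standard library follows; the page content is hypothetical, and a real broken-link check would additionally issue HTTP requests against each target:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets so each category of link can be checked."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical page exercising internal, external, and mail links.
page = """
<a href="/home">internal</a>
<a href="http://example.com/docs">external</a>
<a href="mailto:support@example.com">mail</a>
"""

collector = LinkCollector()
collector.feed(page)
mail_links = [l for l in collector.links if l.startswith("mailto:")]
internal = [l for l in collector.links if l.startswith("/")]
```

Once links are classified, each category can be verified separately, as the checklist above suggests.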

Performance:
Performance testing can be applied to understand the web site's scalability, or to
benchmark the performance in the environment of third-party products such as
servers and middleware for potential purchase.

Connection speed:
Tested over various networks like dial-up, ISDN, etc.

Load
What is the number of users per unit time?
Check for peak loads and how the system behaves.
Large amounts of data accessed by the user.

Stress
Continuous load
Performance of memory, CPU, file handling, etc.
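The load and stress ideas above can be sketched by driving a stand-in function from several concurrent 'users' and recording per-request timings. A real test would target the deployed server rather than a local function, and the user counts here are assumed:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    # Stand-in for the system under load; a real test would hit the server.
    time.sleep(0.001)
    return len(payload)

def run_load(users, requests_per_user):
    """Simulate `users` concurrent users, each sending several requests,
    and collect individual response times."""
    timings = []
    def one_user(_):
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request("data")
            timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(one_user, range(users)))
    return timings

timings = run_load(users=5, requests_per_user=4)
```

Raising `users` probes peak-load behavior; running the loop continuously for a long period turns the same harness into a simple stress test.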

Usability:
Usability testing is the process by which the human-computer interaction
characteristics of a system are measured, and weaknesses are identified for
correction. Usability can be defined as the degree to which a given piece of software
assists the person sitting at the keyboard to accomplish a task, as opposed to
becoming an additional impediment to such accomplishment. The broad goal of
usable systems is often assessed using several criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance

Server side interface:

In web testing the server side interface should be tested.
This is done by verifying that communication is done properly.
Compatibility of the server with software, hardware, network, and database should
be tested.
The client side compatibility is also tested on various platforms, using various
browsers, etc.

Security:
The primary reason for testing the security of a web site is to identify potential
vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection

Performance Testing
Performance testing is a rigorous evaluation of a working system under realistic
conditions, comparing measures such as success rate, task time, and response time
with requirements. The goal of performance testing is not to find bugs, but to
eliminate bottlenecks and establish a baseline for future regression testing.

To conduct performance testing is to engage in a carefully controlled process of
measurement and analysis. Ideally, the software under test is already stable enough
so that this process can proceed smoothly. A clearly defined set of expectations is
essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time
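Given those two expectations, a simple nearest-rank percentile check can decide whether measured response times meet the requirement. The sample data and the 1-second threshold below are assumptions for illustration:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# Hypothetical measurements taken at the expected load.
response_times = [0.12, 0.15, 0.11, 0.35, 0.14, 0.13, 0.16, 0.12, 0.90, 0.14]
acceptable = 1.0  # seconds: the agreed acceptable response time (assumed)
p95 = percentile(response_times, 95)
meets_requirement = p95 <= acceptable
```

Reporting a high percentile rather than the average keeps occasional slow requests from being hidden by many fast ones.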

Compatibility Testing
Testing to ensure compatibility of an application or Web site with different browsers,
OSes, and hardware platforms. Different versions, configurations, display
resolutions, and Internet connection speeds can all impact the behavior of the
product and introduce costly and embarrassing bugs. We test for compatibility using
real test environments; that is, testing how well the system performs in the particular
software, hardware, or network environment. Compatibility testing can be performed
manually or can be driven by an automated functional or regression test suite. The
purpose of compatibility testing is to reveal issues related to the product's interaction
with other software as well as hardware. The product compatibility is evaluated by
first identifying the hardware/software/browser components that the product is
designed to support. Then a hardware/software/browser matrix is designed that
indicates the configurations on which the product will be tested. Then, with input
from the client, a testing script is designed that will be sufficient to evaluate
compatibility between the product and the hardware/software/browser matrix.
Finally, the script is executed against the matrix, and any anomalies are investigated
to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (i.e. zip drives, USBs, etc.)

62. Which comes first: test strategy or test plan?

A: - The test strategy comes first, and it is the high-level document; the approach
for the testing starts from the test strategy, and then based on this the test lead
prepares the test plan.

63. What is the difference between a web-based application and a client-server
application from a tester's point of view?

A: - From the tester's point of view:

1) A web-based application (WBA) is a 3-tier application: browser, back end, and
server. A client-server application (CSA) is a 2-tier application: front end and back end.
2) In a WBA the tester tests for script errors, like JavaScript or VBScript errors,
shown on the page. In a CSA the tester does not test for any script errors.
3) In a WBA, once a change is made it is reflected on every machine, so the tester
has less work to test. Whereas in a CSA the application needs to be installed on
every machine, so it is possible that some machines will have problems; for those,
hardware testing as well as software testing is needed.

63. What is the significance of doing regression testing?

A: - To check the bug fixes, and that these fixes do not disturb other functionality.

To ensure that newly added functionality, modified existing functionality, or a
developer-fixed bug does not raise any new bug or cause any other side effect; this
is called regression testing, and it ensures that already PASSED test cases do not
raise any new bug.
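The idea of re-running already-passed cases after a fix can be sketched as follows; the discount function and its rules are hypothetical:

```python
def discount(price, code=""):
    # Version after a bug fix for empty codes; earlier rules must still hold.
    if code == "SAVE10":
        return round(price * 0.9, 2)
    return price

# Already-passed cases: re-run to confirm the fix introduced no side effects.
regression_cases = [
    ((100, "SAVE10"), 90.0),
    ((100, "OTHER"), 100),
]
# New case covering the bug fix itself.
fix_case = ((100, ""), 100)

all_pass = all(discount(*args) == want
               for args, want in regression_cases + [fix_case])
```

A failure in regression_cases, rather than in fix_case, would mean the fix disturbed existing functionality.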
64. What are the different ways to check a date field on a website?

A: - There are different ways, like:

1) You can check the field width for minimum and maximum.
2) If the field only takes a numeric value, check that it takes only numeric input
and no other type.
3) If it takes a date or time, check for other inputs.
4) In the same way as numeric, you can check it for character, alphanumeric, and
so on.
5) Most importantly, if you click and hit the enter key, sometimes the page may
give a JavaScript error; that is a big fault on the page.
6) Check the field for the null value.
Etc.

The date field can be checked in different ways. Positive testing: first we enter the
date in the given format.
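Positive and negative date checks like those above can be scripted. The dd/mm/yyyy format is an assumption here, and is_valid_date is a hypothetical helper, not a real validator:

```python
from datetime import datetime

def is_valid_date(text, fmt="%d/%m/%Y"):
    """Hypothetical check for a dd/mm/yyyy date field."""
    try:
        datetime.strptime(text, fmt)
        return True
    except ValueError:
        return False

# Positive test: a date in the given format (including a leap day).
assert is_valid_date("29/02/2024")
# Negative tests: out-of-range date, wrong type, and null-ish value.
assert not is_valid_date("29/02/2023")  # not a leap year
assert not is_valid_date("abc")
assert not is_valid_date("")
```

The leap-day cases are boundary values of the calendar itself, which is why they appear in both the positive and negative lists.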

47. If the project wants to release in 3 months, what type of risk analysis do you do
in the test plan?

A: - Use risk analysis to determine where testing should be focused. Since it's rarely
possible to test every possible aspect of an application, every possible combination of
events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgment skills,
common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include:

48. Test cases for IE 6.0, i.e. Internet Explorer 6.0?

A: - Test cases for IE 6.0:

1) First I go for the installation side, meaning: is it working with all versions of
Windows, Netscape, or other software? In other words, IE must be checked with all
hardware and software parts.
2) Secondly, go for the text part: all the text appears in a frequent and smooth
manner.
3) Thirdly, go for the images part: all the images appear in a frequent and smooth
manner.
4) URLs must run in a proper way.
5) Suppose some other language is used on it; then the URL should take characters
other than the normal characters.
6) Is it working with cookies properly or not?
7) Is it working with different scripts like JScript and VBScript?
8) Does HTML code work on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has some links, what is the maximum and minimum limit for them?
12) Test installing Internet Explorer 6 with the Norton Protected Recycle Bin
enabled.
13) Is it working with the uninstallation process?
14) Last but not least, test the security system of IE 6.0.

49. Where are you involved in the testing life cycle, and what types of tests do you perform?

A:– Generally, test engineers are involved in the entire test life cycle: test planning,
test case preparation, execution, and reporting. Typical test types are system testing,
regression testing, ad hoc testing, etc.

50. What is the testing environment in your company; that is, how does the testing process start?

A:– The testing process flows as follows:


Quality assurance unit
Quality assurance manager
Test lead
Test engineer

51. Who prepares the use cases?

A:– In most companies, a business analyst prepares the use cases; in a small company,
the business analyst prepares them along with the team lead.

52. What methodologies have you used to develop test cases?

A:– Generally, test engineers use 4 types of methodologies:


1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing
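As a quick illustration of the first of these, boundary value analysis picks inputs at and just around the edges of a valid range. The 18-to-60 age rule below is an invented example:

```python
def accepts_age(age):
    """Hypothetical rule: valid ages are 18 through 60 inclusive."""
    return 18 <= age <= 60

# Boundary value analysis: test at, just below, and just above each boundary.
boundary_inputs = [17, 18, 19, 59, 60, 61]
results = [accepts_age(a) for a in boundary_inputs]
assert results == [False, True, True, True, True, False]
```

Defects cluster at boundaries, so these six values catch off-by-one mistakes (e.g., `>` written instead of `>=`) that a mid-range value like 40 would miss.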

55. What is the exact difference between a product and a project? Give an example.

A:– A project is developed for a particular client, and the requirements are defined by
that client. A product is developed for the market, and the requirements are defined by
the company itself by conducting a market survey.
Example:
Project: a shirt we have stitched by a tailor to our own specifications.
Product: a ready-made shirt, where the company decides on standard measurements and
makes the product.
A mainframe is a product.
A product has many versions,
but a project has fewer versions, depending on change requests and enhancements.

59. What is the difference between a three-tier and a two-tier application?
A:– A client/server application is a 2-tier application. In this, the front end (client) is
connected to the database server through a Data Source Name; the front end is the
monitoring level.

A web-based architecture is a 3-tier application. In this, the browser is connected to a
web server through TCP/IP, and the web server is connected to the database server; the
browser is the monitoring level. In general, black box testers concentrate on the
monitoring level of any type of application.

All client/server applications are 2-tier architectures.


In this architecture, all the business logic is stored in the clients and the data is
stored in the servers. If the user requests anything, the business logic is performed at
the client, and the data is retrieved from the server (DB server). The problem is that if
any business logic changes, we need to change the logic at each and every client. For
example, take a supermarket with branches across the city. Each branch has clients, so
the business logic is stored in the clients, but the actual data is stored in the servers.
If I want to give a discount on some items, I need to change the business logic, which
means going to each branch and changing the logic on each client. This is the
disadvantage of client/server architecture.

So 3-tier architecture came into the picture:

Here the business logic is stored on one server, and all the clients are thin ("dumb")
terminals. If the user requests anything, the request is first sent to the server; the
server brings the data from the DB server and sends it to the client. This is the flow of
a 3-tier architecture.

For the same example, if I want to give a discount, all my business logic is on the
server, so I need to change it in one place only, not at each client. This is the main
advantage of 3-tier architecture.
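The advantage described above can be sketched in a few lines. The class names and the 25% discount are invented for illustration:

```python
class BusinessTier:
    """3-tier style: all business logic lives in one server-side place."""
    def __init__(self, discount=0.0):
        self.discount = discount

    def price(self, base):
        return base * (1 - self.discount)

class ThinClient:
    """Dumb terminal: forwards requests to the server, holds no logic."""
    def __init__(self, server):
        self.server = server

    def checkout(self, base):
        return self.server.price(base)

# One change on the server (the discount) is seen by every client --
# unlike 2-tier, where each client's copy of the logic must be updated.
server = BusinessTier(discount=0.25)
branches = [ThinClient(server) for _ in range(3)]
assert all(client.checkout(100) == 75.0 for client in branches)
```

In the 2-tier supermarket example, each `ThinClient` would instead embed its own copy of `price`, and the discount change would have to be deployed to every branch.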

1. What are SQA activities?


2. How can we perform testing without expected results?
4. How do you conduct boundary analysis testing for an "OK" pushbutton?
5. What are the entry and exit criteria in a test plan?
6. To whom do you send test deliverables?
8. Who writes the business requirements? What do you do when you have the BRD?
9. What do we normally check for in database testing?
11. What are the key elements for creating a test plan?
12. How do you ensure the quality of the product?
13. What is the job of a quality assurance engineer? What is the difference between the
testing and quality assurance jobs?
16. How have you used white box and black box techniques in your application?
17. What is the role of QA in project development?
18. How can you test a blank (white) page?
19. How do you scope, organize, and execute a test project?
20. What is the role of QA in a company that produces software?
23. How do you decide when you have 'tested enough'?

27. If the actual result doesn't match the expected result, what should we do in this
situation?
29. What is the difference between functional testing and black box testing?
30. What is the heuristic checklist used in unit testing?
31. What is the difference between system testing, integration testing, and system
integration testing?
32. How do you calculate the estimate for test case design and review?
34. What are the contents of a risk management plan? Have you ever prepared a risk
management plan?
35. If we have no SRS or BRS but we have test cases, do you execute the test cases
blindly or do you follow any other process?
A: — A test case would have detailed steps of what the application is supposed to do. So:
1) The functionality of the application is known.

2) In addition, you can refer to the back end, i.e., look into the database, to gain more
knowledge of the application.

32. How do you execute a test case?


A: — There are two ways:
1. A manual runner tool for manual execution and updating of test status.
2. Automated test case execution by specifying the host name and other
automation-related details.

36. What are smoke testing and user interface testing?

A: — Smoke testing:
Smoke testing is non-exhaustive software testing which checks that the most crucial
functions of a program work, without bothering with finer details. The term comes to
software testing from a similarly basic type of hardware testing.

UI testing:
After a bit of research, some say it is nothing but usability testing: testing to
determine the ease with which a user can learn to operate, provide input to, and
interpret the outputs of a system or component.

Smoke testing checks whether the basic functionality of the build is stable or not;
i.e., if it possesses about 70% of the functionality, we say the build is stable.
User interface testing: we check whether all the fields exist as per the format; we
check spelling, graphics, font sizes, and everything else present in the window.
38. What is the difference between functional testing and integration testing?
A: — Functional testing is testing the whole functionality of the system or application
against the functional specifications.

Integration testing means testing the functionality of integrated modules when two
individual modules are integrated; for this we use the top-down approach and the
bottom-up approach.

39. What types of testing do you perform in your organization while doing system
testing? Explain clearly.

A: — Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web
Consortium (W3C)

41. How can you do 1) usability testing and 2) scalability testing?

A:–
Usability testing:
Testing the ease with which users can learn and use a product.

Scalability testing:
A web testing term: it measures a web site's capacity to handle growth in load.
Portability testing:
Testing to determine whether the system/software meets the specified portability
requirements.

42. What do you mean by positive and negative testing, and what is the difference
between them? Can you explain with an example?

A: — Positive testing: testing the application functionality with valid inputs and
verifying that the output is correct.
Negative testing: testing the application functionality with invalid inputs and
verifying the output.

The difference is in how the application behaves when we enter invalid inputs: if it
accepts invalid input, the application functionality is wrong.

Positive test: testing aimed at showing the software works, i.e., with valid inputs.
This is also called "test to pass".
Negative testing: testing aimed at showing the software doesn't work with invalid
inputs, also known as "test to fail". Boundary value analysis is the best example of
negative testing.
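A minimal sketch of the two styles, assuming a hypothetical quantity field that should accept only positive integers (the function name and rule are invented for illustration):

```python
def is_valid_quantity(text):
    """Hypothetical field rule: a quantity is a positive whole number."""
    return text.isdigit() and int(text) > 0

# Positive testing ("test to pass"): valid input, verify correct output.
assert is_valid_quantity("5")
# Negative testing ("test to fail"): invalid input must be rejected.
assert not is_valid_quantity("-5")     # sign not allowed
assert not is_valid_quantity("abc")    # wrong character type
assert not is_valid_quantity("0")      # boundary value below the minimum
```

If the application accepted `"-5"` or `"abc"`, the negative tests would expose the defect even though every positive test passes.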

44. What is risk analysis, and what type of risk analysis did you do in your project?

A: — Risk analysis:
A systematic use of available information to determine how often specified and
unspecified events may occur and the magnitude of their likely consequences.

OR

A procedure to identify threats and vulnerabilities, analyze them to ascertain the
exposures, and highlight how the impact can be eliminated or reduced.

Types:

1. Quantitative risk analysis


2. Qualitative risk analysis

21. What is a test plan? Explain its contents.


A: — A test plan is a document which contains the scope of testing the application:
what is to be tested, when it is to be tested, and who is to test it.

25. Scalability testing comes under which type of testing?


A: — Scalability testing comes under performance testing. Load testing and scalability
testing are closely related.
28. What is the difference between functional test cases and compatibility test cases?
A: — There are no separate test cases for compatibility testing; in compatibility
testing we run the application on different hardware and software configurations.

Mostly this will be done along with database testing.

• FUNCTIONAL TESTING. Validating that an application or Web site conforms to its


specifications and correctly performs all its required functions. This entails a
series of tests which perform a feature-by-feature validation of behavior, using a
wide range of normal and erroneous input data. This can involve testing of the
product's user interface, APIs, database management, security, installation,
networking, etc. Functional testing can be performed on an automated or manual basis
using black box or white box methodologies.

System testing - The entire system is tested as per the requirements. Black-box type
testing that is based on overall requirements specifications and covers all combined
parts of a system.

Comparison testing - Comparison of product strengths and weaknesses with previous


versions or other similar products.

Active Test
Introducing test data and analyzing the results. Contrast with "passive test" (below).

Age Test (aging)


Evaluating a system's ability to perform in the future. To perform these tests,
hardware and/or test data is modified to a future date.

Dirty Test
Same as "negative test."

Environment Test
A test of new software that determines whether all transactions flow properly between
input, output and storage devices. See environment test.

Fuzz Test
Testing for software bugs by feeding it randomly generated data. See fuzz testing.

Gray Box Test


Testing software with some knowledge of its internal code or logic. Contrast with
"white box test" and "black box test."

Passive Test
Monitoring the results of a running system without introducing any special test data.
Contrast with "active test" (above).
System Test
Overall testing in the lab and in the user environment. See alpha test and beta test.

Test Case
A set of test data, test programs and expected results. See test case.

Benchmark Testing: Tests that use representative sets of programs and data designed to
evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a pre-release version of a software product, conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system
platforms and environments, usually for conformance to an ABI specification.

CAST: Computer Aided Software Testing.

Coding: The generation of source code.

Component: A minimal software item for which a separate specification is available.

Data Dictionary: A database that contains definitions of all data items defined during
analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a


system.

Emulator: A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system.

Equivalence Class: A portion of a component's input or output domains for which the
component's behaviour is assumed to be the same from the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test
cases are designed to execute representatives from equivalence classes.
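For instance, applying the two definitions above to a hypothetical month field (valid values 1 to 12) gives three equivalence classes, each tested through one representative:

```python
def is_valid_month(m):
    """Hypothetical component: a month field accepting 1..12."""
    return 1 <= m <= 12

# One representative per equivalence class stands in for the whole class.
partitions = {
    "below range": -3,   # representative of all m < 1
    "in range":     7,   # representative of all 1 <= m <= 12
    "above range": 40,   # representative of all m > 12
}
expected = {"below range": False, "in range": True, "above range": False}
for name, representative in partitions.items():
    assert is_valid_month(representative) == expected[name]
```

Three test cases cover the behavior the specification implies for the whole input domain, instead of testing every possible integer.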

Functional Decomposition: A technique used during planning, analysis and design;


creates a functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the


product with regard to its intended features.

Glass Box Testing: A synonym for White Box Testing.

Gray Box Testing: A combination of Black Box and White Box testing methodologies:
testing a piece of software against its specification but using some knowledge of its
internal workings.
High Order Tests: Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG): A group of people whose primary responsibility is


software testing.

Inspection: A group review quality improvement process for written material. It consists
of two aspects: product (document itself) improvement and process improvement (of both
document production and inspection).

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles
in which errors found in test cycle N are resolved and the solution is retested in test cycle
N+1. The cycles are typically repeated until the solution reaches a steady state and there
are no errors. See also Regression Testing.

Quality Audit: A systematic and independent examination to determine whether quality


activities and related results comply with planned arrangements and whether these
arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals
to consider problems or other matters related to the quality of outputs of a process and to
the correction of problems or to the improvement of quality.

Quality Management: That aspect of the overall management function that determines
and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality
as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and


resources for implementing quality management.

Race Condition: A cause of concurrency problems. Multiple accesses to a shared


resource, at least one of which is a write, with no mechanism used by either to moderate
simultaneous access.

Release Candidate: A pre-release version, which contains the desired functionality of the
final version, but which needs to be tested for bugs (which ideally should be removed
before the final version is released).

Software Requirements Specification: A deliverable that describes all data, functional


and behavioral requirements, all constraints, and all validation requirements for
software.

Software Testing: A set of activities conducted with the intent of finding errors in
software.
Test Driven Development: Testing methodology associated with Agile Programming in
which every chunk of code is covered by unit tests, which must all pass all the time, in an
effort to eliminate unit-level and regression bugs during development. Practitioners of
TDD write a lot of tests, i.e. an equal number of lines of test code to the size of the
production code.
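A minimal flavor of that cycle, using an invented `slugify` function; in TDD the assertions are written first and drive the implementation:

```python
# Red: these tests are written first and fail while slugify is missing.
# Green: write the simplest slugify that makes them pass.
# Refactor: clean up the code while keeping the tests green.

def slugify(title):
    """Toy example: lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

assert slugify("Hello World") == "hello-world"
assert slugify("  Test   Driven  Development ") == "test-driven-development"
```

The test code here is about as long as the production code, which matches the practice described above.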

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which
they are to be executed.

Test Specification: A document specifying the test approach for a software feature or
combination of features, and the inputs, predicted results and execution conditions for
the associated tests.

Test Tools: Computer programs used in the testing of a system, a component of the
system, or its documentation.

Total Quality Management: A company commitment to develop a process that achieves


high-quality products and customer satisfaction.

Usability Testing: Testing the ease with which users can learn and use a product.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows


which are expected to be utilized by the end-user.

What is the maximum length of a test case we can write?
We can't say exactly; the length of a test case depends on the functionality.
If a password field takes 6-character alphanumeric input, what are the possible input
conditions, including special characters?

Possible input conditions are:

1) Input password as = 6abcde (i.e., number first)


2) Input password as = abcde8 (i.e., character first)
3) Input password as = 123456 (all numbers)
4) Input password as = abcdef (all characters)
5) Input password less than 6 characters
6) Input password greater than 6 characters
7) Input password as special characters
8) Input password in CAPITALS, i.e., uppercase
9) Input password including a space
10) A space followed by alphabetic/numeric/alphanumeric characters
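Several of these conditions translate directly into assertions. The rule "exactly 6 alphanumeric characters" and the function name are assumptions for illustration:

```python
import re

def valid_password(pw):
    """Hypothetical rule: exactly 6 alphanumeric characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{6}", pw))

# A few of the input conditions listed above:
assert valid_password("6abcde")       # number first
assert valid_password("abcde8")       # character first
assert valid_password("123456")       # all numbers
assert valid_password("ABCDEF")       # all uppercase
assert not valid_password("abcd5")    # fewer than 6 characters
assert not valid_password("abcdefg")  # more than 6 characters
assert not valid_password("abc !@")   # special characters / space
```

Each list item above becomes one test case; the length and character-class conditions are exactly the boundary and equivalence checks discussed earlier.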

If I am given some thousands of tests to execute in 2 days, what do I do?


If possible, we will automate them; otherwise, we execute only the test cases which are
mandatory.

What does black-box testing mean at the unit, integration, and system levels?

Tests for each software requirement using equivalence class partitioning, boundary
value testing, and more.
Test cases for system software requirements using the trace matrix, cross-functional
testing, decision tables, and more.
Test cases for system integration for configurations, manual operations, etc.

White-Box: Structure-Based Testing

What is white-box testing?


Levels of coverage: statement, branch, path
What does white-box testing mean at the unit, integration, and system levels?
Understanding control flow and cyclomatic complexity
McCabe's design predicate approach for choosing tests
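To make the coverage levels concrete, here is a toy sketch (the function and inputs are invented for illustration):

```python
def classify(n):
    # Two decision points, so cyclomatic complexity is 3
    # (independent paths: n < 0, n == 0, n > 0).
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Statement coverage requires every line to execute; branch coverage
# requires both outcomes of each `if`. These three inputs achieve both:
assert classify(-1) == "negative"
assert classify(0) == "zero"
assert classify(5) == "positive"
```

A single input like `classify(5)` would give only partial coverage: it takes the false branch of both `if` statements and never reaches the first two `return` lines.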

Using the Test Design Process


Documenting the test design
How much is enough?

Regression and test design


How can test design aid in making critical regression decisions?

Default port number of Tomcat?


8080 is the default port number of Tomcat

Smoke test? Do you use any automation tool for smoke testing?
Testing whether the application performs its basic functionality properly or not, so
that the test team can go ahead with the application. An automation tool can definitely
be used.

Testing methodology?
It varies from company to company (refer to the Symphony and Mphasis websites for
different methodologies).

Explain some SDLC models?


The V model, the waterfall model, etc.

What are the contents of a release checklist?

When a new build comes, what is the first action? (Performing a smoke test.)

How many test cases will you write in 1 day?


It varies with the complexity of the requirements. Some write 1 or 2 per day; some
write up to 20 per day.


What is the difference between STLC and SDLC?


STLC is the software test life cycle; it starts with:
• Preparing the test strategy.
• Preparing the test plan.
• Creating the test environment.
• Writing the test cases.
• Creating test scripts.
• Executing the test scripts.
• Analyzing the results and reporting the bugs.
• Doing regression testing.
• Test exiting.
SDLC is the software (or system) development life cycle; its phases are:
• Project initiation.
• Requirement gathering and documenting.
• Designing.
• Coding and unit testing.
• Integration testing.
• System testing.
• Installation and acceptance testing.
• Support or maintenance.

SCM and SQA are followed throughout the cycle.

Transactions per second?


TPS (transactions per second): a metric used to measure database performance.

How do you break down the project among team members?


It can depend on the following:
1) Number of modules
2) Number of team members
3) Complexity of the project
4) Time duration of the project
5) Team members' experience, etc.

What is test data collection?


Test data is the collection of input data taken for testing the application. Input data
of various types and sizes will be taken for testing the application. Sometimes, in
critical applications, the test data collection will be given by the client as well.

What is a test server?


A test server is the place where the developers put their development modules, which
are accessed by the testers to test the functionality (soft base).

What are non-functional requirements?


The non-functional requirements of a software product are: reliability, usability,
efficiency, delivery time, software development environment, security requirements,
standards to be followed etc.
Why do we perform stress testing, resolution testing, and cross-browser testing?

There are two sand clocks (timers): one empties completely in 7 minutes and the other
in 9 minutes. Using these timers, we have to ring the bell after exactly 11 minutes.
Please give the solution.
1. Start both clocks.
2. When the 7-minute clock completes, turn it over so that it restarts.
3. When the 9-minute clock finishes, turn over the 7-minute clock (it has only 2
minutes of sand on top).
4. When the 7-minute clock finishes, 11 minutes are complete.
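The timeline can be checked with simple arithmetic:

```python
# Timeline of the 7- and 9-minute sand timers, in minutes from the start.
start_both = 0
flip_7 = 7                 # 7-min timer empties; restart it
flip_7_again = 9           # 9-min timer empties; the restarted 7-min timer
                           # has run 9 - 7 = 2 min, so flipping it leaves
                           # exactly 2 min of sand on top
bell = flip_7_again + 2    # the 7-min timer empties again

assert bell == 11          # the bell rings at exactly 11 minutes
```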

What are the minimum criteria for white box testing?


We should know the logic, code, and structure of the program or function: internal
knowledge of the application, how the system works, what the logic behind it is, and
how its structure should react to a particular action.
What are technical reviews and reviews?
Each document should be reviewed. Technical review in this sense: for each screen, the
developer writes a technical specification, which should be reviewed by a developer and
a tester. There are also functional specification reviews, unit test case reviews, code
reviews, etc.
How do you answer when the interviewer asks "What is your project architecture?" Please
tell me in general.
It is a somewhat tricky question, but the answer is very simple: the interviewer wants
to know the flow of your project, exploring the HLD, DLD, and LLD. So give an overview
of the project, explain how data flows from one module to another, what the sub-modules
are, and how data is given to the sub-modules.

On what basis will you write test cases?


I would write the test cases based on functional specifications and BRDs, plus some
more test cases using domain knowledge.

What are the main key components in web applications and client/server applications?
(What are the differences?)

For web applications: a web application can be implemented using any kind of technology
like Java, .NET, VB, ASP, CGI, and Perl. Based on the technology, we can derive the
components.

Let's take a Java web application. It can be implemented in a 3-tier architecture:


Presentation tier (JSP, HTML, DHTML, servlets, Struts), business tier (JavaBeans, EJB,
JMS), and data tier (databases like Oracle, SQL Server, etc.).

If you take a .NET application: presentation (ASP, HTML, DHTML), business tier (DLLs),
and data tier (a database like Oracle or SQL Server).
Client/server applications: these have only 2 tiers. One is presentation (Java, Swing)
and the other is the data tier (Oracle, SQL Server). In a client/server architecture,
the entire application has to be installed on the client machine; whenever you make any
changes in the code, it has to be installed again on all the client machines. In web
applications, the core application resides on the server and the client can be thin (a
browser). Whatever changes you make, you install the application only on the server.
There is no need to worry about the clients, because you do not install anything on the
client machine.

What is a formal technical review?


A technical review should be done by a team of members. The preparer of the document
under review and the reviewers sit together and review the document; this is called a
peer review. If it is a technical document, it can be called a formal technical review.
It varies depending on company policy.

At what phase does the tester's role start?


In the SDLC, after completion of the FRS document, the test lead prepares the use case
document and the test plan document; then the tester's role starts.

Actually, how many positive and negative test cases will you write for a module?
That depends on the module and the complexity of its logic. For every test case we can
identify positive and negative points, and based on those criteria we write the test
cases. If it is a crucial process or screen, we should check the screen in all the
boundary conditions.

What is the difference between Access (a DBMS) and an RDBMS like SQL Server or Oracle?
Why is Access not used in web-based applications?
The difference is that in Access we do not have the relations to carry the database: we
do not have normalization and joins in the same way, but in Oracle we have normalized
data and relations.

What is Six sigma? Explain.

Six Sigma:
A quality discipline that focuses on product and service excellence to create a culture that
demands perfection on target, every time.

Six Sigma quality levels


Produces 99.9997% accuracy, with only 3.4 defects per million opportunities.

Six Sigma is designed to dramatically upgrade a company's performance, improving


quality and productivity. Using existing products, processes, and service standards,
companies follow the Six Sigma MAIC methodology to upgrade performance.

MAIC is defined as follows:


Measure: Gather the right data to accurately assess a problem.
Analyze: Use statistical tools to correctly identify the root causes of a problem.
Improve: Correct the problem (not the symptom).
Control: Put a plan in place to make sure problems stay fixed and sustain the gains.

Key Roles and Responsibilities:

The key roles in all Six Sigma efforts are as follows:

Sponsor: Business executive leading the organization.


Champion: Responsible for Six Sigma strategy, deployment, and vision.
Process Owner: Owner of the process, product, or service being improved; responsible
for long-term sustainable gains.
Master Black Belts: Coach the black belts; experts in all statistical tools.
Black Belts: Work on 3 to 5 $250,000-per-year projects; create $1 million per year in
value.
Green Belts: Work with black belts on projects.

What are cookies? Tell me the advantage and disadvantage of cookies?


Cookies are messages that web servers pass to your web browser when you visit Internet
sites. Your browser stores each message in a small file. When you request another page
from the server, your browser sends the cookie back to the server. These files typically
contain information about your visit to the web page, as well as any information you've
volunteered, such as your name and interests.

Cookies are most commonly used to track web site activity. When you visit some sites,
the server gives you a cookie that acts as your identification card. Upon each return
visit to that site, your browser passes that cookie back to the server. In this way, a
web server can gather information about which web pages are used the most, and which
pages are gathering the most repeat hits.

Only the web site that creates the cookie can read it. Additionally, web servers can
only use information that you provide, or choices that you make while visiting the web
site, as content in cookies. Accepting a cookie does not give a server access to your
computer or any of your personal information. Servers can only read cookies that they
have set, so other servers do not have access to your information. Also, it is not
possible to execute code from a cookie, and not possible to use a cookie to deliver a
virus.
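Mechanically, a cookie is just a set of name=value pairs carried in HTTP headers. Python's standard `http.cookies` module can parse one (the cookie names and values below are invented for illustration):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie style string the way a browser would store it.
cookie = SimpleCookie()
cookie.load("session_id=abc123; theme=dark")

assert cookie["session_id"].value == "abc123"
assert cookie["theme"].value == "dark"

# On the next request, the browser sends the pairs back in a Cookie header.
header = cookie.output(attrs=[], header="Cookie:", sep="; ")
assert "session_id=abc123" in header
```

Note that only data is exchanged: there is no executable content in the cookie, which matches the security point above.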


1. What if there isn't enough time for thorough testing?


Answer: First try to pull in more resources from other projects. If that is not
possible, use risk analysis to determine where testing should be focused.
2. How can World Wide Web sites be tested?
Answer: Web sites are essentially client/server applications - with web servers and
'browser' clients. Consideration should be given to the interactions between html pages,
TCP/IP communications, Internet connections, firewalls, applications that run in web
pages (such as applets, javascript, plug-in applications), and applications that run on the
server side (such as cgi scripts, database interfaces, logging applications, dynamic page
generators, asp, etc.). Additionally, there are a wide variety of servers and browsers,
various versions of each, small but sometimes significant differences between them,
variations in connection speeds, rapidly changing technologies, and multiple standards
and protocols. The end result is that testing for web sites can become a major ongoing
effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit
time), and what kind of performance is required under such loads (such
as web server response time, database query response times)? What kinds
of tools will be needed for performance testing (such as web load testing
tools, other tools already in house that can be adapted, web robot
downloading tools, etc.)?
• Pages should be 3-5 screens max unless content is tightly focused on a
single topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a
site, so that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be
provided or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-
end pages.
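For the load-testing consideration above, a minimal driver can be sketched in Python. The request function below is a stand-in; a real run would call, e.g., urllib.request.urlopen on the page under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def load_test(request_fn, users=10, requests_per_user=5):
    """Fire concurrent requests and collect per-request latencies."""
    def worker(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=users) as pool:
        results = pool.map(worker, range(users))
    all_latencies = [t for batch in results for t in batch]
    return {"requests": len(all_latencies),
            "avg_sec": mean(all_latencies),
            "max_sec": max(all_latencies)}

# Stand-in for a real HTTP GET against the site under test.
stats = load_test(lambda: time.sleep(0.001), users=5, requests_per_user=4)
print(stats["requests"])  # 20
```

Dedicated web load tools add ramp-up schedules, think times and reporting on top of this same idea.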
3. What tests do we run when a new build comes into the test environment?
Answer: Generic sequence:
1. Run the smoke test plan.
2. Execute the functional test plan (if planned).
3. Verify bug fixes.
4. Execute the regression test plan applicable to the build under test (if planned).
Based on these executions, the test lead or QA manager decides whether the
build can be accepted.
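The acceptance sequence above can be captured as a tiny smoke-check harness; the three checks are illustrative placeholders for real ones that would hit the deployed build:

```python
def run_smoke_suite(checks):
    """Run each named check; the build is accepted only if none fail."""
    failures = []
    for name, check in checks:
        try:
            check()
        except AssertionError as exc:
            failures.append((name, str(exc)))
    return {"accepted": not failures, "failures": failures}

# Illustrative checks; in practice these would exercise the new build.
checks = [
    ("build installs", lambda: None),
    ("app starts", lambda: None),
    ("login works", lambda: None),
]
result = run_smoke_suite(checks)
print(result["accepted"])  # True
```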
4. How should you learn about problems discovered in the field, and what should you
learn from those problems?
Answer: Learn about the scenario in which the problem occurred, then check your test
plans, test designs and test cases for that scenario. Two outcomes are possible:
1. You do not have that scenario. It is an escape: analyze where you slipped and make
amendments in future releases/versions.
2. You have the scenario but did not cover it because of a lack of time or resources
(hardware, software or other). That is not your fault; report it to your manager and he
will take care of it.
5. How do you scope, organize, and execute a test project?
Answer: The test plan covers all of this: scope, approach, resources, schedule, and responsibilities.

What goes into a test package?

What test data would you need to test that a specific date occurs on a specific day of
week?
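One way to answer: the data should include anchor dates whose weekday is independently known, plus boundary dates (leap days, century years, year ends). A quick sketch of such a data set, checked against Python's calendar:

```python
from datetime import date

# Known (date, weekday) pairs covering leap years, the century rule and
# year boundaries; weekday() returns 0 for Monday .. 6 for Sunday.
cases = [
    (date(2000, 2, 29), 1),   # leap day in a century leap year: Tuesday
    (date(1900, 2, 28), 2),   # 1900 was NOT a leap year: Wednesday
    (date(1999, 12, 31), 4),  # year boundary: Friday
    (date(2024, 1, 1), 0),    # Monday
]
for d, expected in cases:
    assert d.weekday() == expected, (d, d.weekday())
print("all day-of-week cases pass")
```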

What would you do if management pressure is stating that testing is complete and
you feel differently?
Who should be involved in each level of testing? What should be their
responsibilities?

You have more verifiable QA experience testing:

What is an advantage of black box testing over white box testing:

Tests give confidence that the program meets its specifications
Tests can be done while the program is being written instead of waiting until it is finished
It ensures that every piece of code written is tested in some way
Tests give confidence that every part of the code is working

Your experience with Programming within the context of Quality Assurance is:
N/A - I have no programming experience in C, C++ or Java.
You have done some programming in your role as a QA Engineer, and are comfortable
meeting such requirements in Java, C, C++ or VC++.
You have developed applications of moderate complexity that have taken up to three
months to complete.
Why does testing not prove a program is 100 percent correct (except for extremely
simple programs)?

Because we can only test a finite number of cases, but the program may have an infinite
number of possible combinations of inputs and outputs
Because the people who test the program are not the people who write the code
Because the program is too long
All of the above
We CAN prove a program is 100 percent correct by testing

Which of the following testing strategies ignores the internal structure of the
software?

Interface testing
Top down testing
White box testing
Black box testing
Sandwich testing

You are the test manager starting on system testing. The development team says
that due to a change in the requirements, they will be able to deliver the system for
SQA 5 days past the deadline. You cannot change the resources (work hours, days,
or test tools). What steps will you take to be able to finish the testing in time?
Your company is about to roll out an e-commerce application. It’s not possible to
test the application on all types of browsers on all platforms and operating systems.
What steps would you take in the testing environment to reduce the business risks
and commercial risks?

In your organization, developers are delivering code for system testing without
performing unit testing.

Give an example of a test policy:
Policy statement
Methodology
Measurement

Testers in your organization are performing tests on the deliverables even after
significant defects have been found. This has resulted in unnecessary testing of little
value, because re-testing needs to be done after defects have been rectified. You are
going to update the test plan with recommendations on when to halt testing. What
recommendations are you going to make?

How do you measure:


Test Effectiveness
Test Efficiency

How do you test if you have minimal or no documentation about the product?

Realising you won't be able to test everything - how do you decide what to test first?

How do you handle conflict with programmers?

Can testability features be added to the product code?

Do testers and developers work cooperatively and with mutual respect?

Have you defined the requirements and success criteria for automation?

Why did you ever become involved in QA/testing?

What is the testing lifecycle and explain each of its phases?

What are two of your strengths that you will bring to our QA/testing team?

What do you like most about Quality Assurance/Testing?


What is your experience with change control? Our development team has only 10
members. Do you think managing change is such a big deal for us?

Can you build a good audit trail using Compuware's QACenter products? Explain why.

How important is Change Management in today's computing environments?

Do you think tools are required for managing change? Explain, and please list some
tools/practices which can help you manage change.

We believe in ad-hoc software processes for projects. Do you agree with this? Please
explain your answer.

When is a good time for system testing?

Our software designers use UML for modeling applications. Based on their use cases, we
would like to plan a test strategy. Do you agree with this approach, or would it mean
more effort for the testers?

How can one file-compare future-dated output files from a program which has changed,
against the baseline run which used the current date for input? The client does not want
to mask dates on the output files to allow compares. Answer: Rerun the baseline with
input files future-dated by the same number of days as the future-dated run of the
changed program. Then run a file compare of the baseline's future-dated output against
the changed program's future-dated output.
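Once both runs are future-dated by the same offset, the compare itself is an ordinary file diff. A minimal sketch with illustrative output lines:

```python
import difflib

# Illustrative output lines from the two future-dated runs.
baseline_output = ["ACCT 1001 DUE 2031-06-15", "ACCT 1002 DUE 2031-07-01"]
changed_output  = ["ACCT 1001 DUE 2031-06-15", "ACCT 1002 DUE 2031-07-01"]

# Because both runs used the same date offset, the dates line up and
# no masking is needed before comparing.
diff = list(difflib.unified_diff(baseline_output, changed_output, lineterm=""))
print("MATCH" if not diff else "\n".join(diff))  # MATCH
```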

What criteria would you use to select Web transactions for load testing?

What are the reasons why parameterization is necessary when load testing the Web server
and the database server?

How can data caching have a negative effect on load testing results?
What usually indicates that your virtual user script has dynamic data that is dependent on
your parameterized fields?
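The connection between these questions can be sketched: identical requests let servers answer from cache and collide on unique keys, so virtual-user data is parameterized. A minimal sketch of rotating parameter data (in a real tool such as LoadRunner or JMeter this would be a data file, not an in-memory list):

```python
import itertools

# Illustrative per-virtual-user data pool.
user_pool = [("alice", "SKU-1"), ("bob", "SKU-2"), ("carol", "SKU-3")]
next_params = itertools.cycle(user_pool)

def build_request():
    user, sku = next(next_params)
    # A unique query per virtual user defeats server/database caching.
    return f"/checkout?user={user}&item={sku}"

requests = [build_request() for _ in range(4)]
print(requests[0])  # /checkout?user=alice&item=SKU-1
```

With only three data rows the pool repeats on the fourth request, which is exactly the kind of reuse that can let caching skew load-test results.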

What are the benefits of creating multiple actions within any virtual user script?

Top management felt that when there are changes in the technology being used,
development schedules, etc., it is a waste of time to update the test plan; instead, they
emphasized that you should put your time into testing rather than into working on the
test plan. Your Project Manager asked for your opinion. You have argued that the test
plan is very important and needs to be updated from time to time: it is not a waste of
time, and testing activities are more effective when the plan is clear. Using some
metrics, how would you support your argument that the test plan should be kept
consistently updated?
The QAI is starting a project to put the CSTE certification online. They will use an
automated process for recording candidate information, scheduling candidates for exams,
keeping track of results and sending out certificates. Write a brief test plan for this new
project.

A project had a very high cost of testing. After going into detail, someone found out
that the testers were spending their time on software that doesn't have many defects.
How will you verify that this is the case?

What happens to the test plan if the application has a functionality not mentioned in the
requirements?

You are given two scenarios to test. Scenario 1 has only one terminal for entry and
processing whereas scenario 2 has several terminals where the data input can be made.
Assuming that the processing work is the same, what would be the specific tests that you
would perform in Scenario 2, which you would not carry on Scenario 1?
What is the need for Test Planning?

How can software QA processes be implemented without stifling productivity?

How is testing affected by object-oriented designs?


Write a test transaction for a scenario where a 6.2% tax deduction has to be applied to
the first $62,000 of income.
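A boundary-value sketch of that transaction, assuming the rule means the deduction applies only to the first $62,000 of income (the function name and test values are illustrative):

```python
def tax_deduction(income, rate=0.062, cap=62_000):
    """6.2% deduction applied only to the first $62,000 of income."""
    return round(rate * min(income, cap), 2)

# Boundary-value test transactions: below, at, and above the cap.
assert tax_deduction(0) == 0.0
assert tax_deduction(61_999) == 3843.94   # just below the cap
assert tax_deduction(62_000) == 3844.0    # exactly at the cap
assert tax_deduction(100_000) == 3844.0   # above the cap: deduction stops
print("all tax boundary cases pass")
```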

What would be the Test Objective for Unit Testing? What are the quality measurements
to assure that unit testing is complete?

Prepare a checklist for the developers on Unit Testing before the application comes to
testing department.

Draw a pictorial diagram of a report you would create for developers to determine project
status.

Draw a pictorial diagram of a report you would create for users and management to
determine project status.

What 3 tools would you purchase for your company for use in testing? Justify the need?
Put the following concepts in order, and provide a brief description of each:
system testing
acceptance testing
unit testing
integration testing
benefits realization testing

What are two primary goals of testing?


If your company is going to conduct a review meeting, who should be on the review
committee, and why?

Write any three attributes which will impact the Testing Process.

You are a tester for testing a large system. The system data model is very large with
many attributes and there are a lot of inter-dependencies within the fields. What steps
would you use to test the system and also what are the effects of the steps you have taken
on the test plan?

Q10. How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large
organizations with high-risk projects, a serious management buy-in is required and a
formalized QA process is necessary. For medium size organizations with lower risk
projects, management and organizational buy-in and a slower, step-by-step process is
required. Generally speaking, QA processes should be balanced with productivity, in
order to keep any bureaucracy from getting out of hand. For smaller groups or
projects, an ad-hoc process is more appropriate. A lot depends on team leads and
managers; feedback to developers and good communication among customers,
managers, developers, test engineers and testers are essential. Regardless of the size of
the company, the greatest value for effort is in managing requirement processes, where
the goal is requirements that are clear, complete and testable.

What is the big deal about requirements?
A: Requirement specifications are important; one of the most reliable methods of
ensuring problems in a complex software project is to have poorly documented
requirement specifications. Requirements are the details describing an application's
externally perceived functionality and properties. Requirements should be clear,
complete, reasonably detailed, cohesive, attainable and testable. A non-testable
requirement would be, for example, "user-friendly", which is too subjective. A testable
requirement would be something such as, "the product shall allow the user to enter their
previously-assigned password to access the application". Care should be taken to involve
all of a project's significant customers in the requirements process. Customers could be
in-house or external and could include end-users, customer acceptance test engineers,
testers, customer contract officers, customer management, future software maintenance
engineers and salespeople. Anyone who could later derail the project if their
expectations aren't met should be included as a customer, if possible. In some
organizations, requirements may end up in high-level project plans, functional
specification documents, design documents, or other documents at various levels of
detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by test engineers in order to properly plan and execute tests.
Without such documentation there will be no clear-cut way to determine if a software
application is performing correctly.
What is a test case?
A: A test case is a document that describes an input, action, or event and its expected
result, in order to determine if a feature of an application is working correctly. A test case
should contain particulars such as ...
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Note: the process of developing test cases can help find problems in the
requirements or design of an application, since it requires you to completely think
through the operation of the application. For this reason, it is useful to prepare test
cases early in the development cycle, if possible.
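The particulars listed above can be captured as a simple record; the field names follow the list, and the sample values are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    steps: list = field(default_factory=list)
    expected_result: str = ""

# Illustrative test case built from the fields above.
tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    objective="Verify a registered user can log in",
    setup="User 'demo' exists with a known password",
    steps=["Open login page", "Enter credentials", "Submit"],
    expected_result="User lands on the dashboard",
)
print(tc.identifier, len(tc.steps))  # TC-LOGIN-001 3
```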

• Q26. What if there isn't enough time for thorough testing?


A: Since it's rarely possible to test every possible aspect of an application, every
possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate to most software development projects. Use
risk analysis to determine where testing should be focused. This requires
judgment skills, common sense and experience. The checklist should include
answers to the following questions:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
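The checklist answers can feed a simple risk-priority ranking, for example impact times likelihood on illustrative 1-5 scales, to decide where limited testing time goes first:

```python
# (feature, impact 1-5, likelihood-of-failure 1-5): illustrative scores that
# would come from answering the checklist questions above.
features = [
    ("payment processing", 5, 3),
    ("report formatting",  2, 2),
    ("new rush-mode code", 4, 5),
]

# Rank by risk score = impact * likelihood, highest first.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: risk={impact * likelihood}")
# new rush-mode code comes first (risk=20), report formatting last (risk=4)
```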
Q29. What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or
hidden functionality, which would indicate deeper problems in the software
development process. If the functionality isn't necessary to the purpose of the application,
it should be removed, as it may have unknown impacts or dependencies that were not
taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added
risks as a result of the unexpected functionality. If the functionality only affects minor
areas, such as small improvements in the user interface, it may not be a significant risk.
Q30. How can software QA processes be implemented without stifling productivity?
A: Implement QA processes slowly over time. Use consensus to reach agreement on
processes and adjust and experiment as an organization grows and matures. Productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem
detection. Panics and burnout will decrease and there will be improved focus and less
wasted effort. At the same time, attempts should be made to keep processes simple and
efficient, minimize paperwork, promote computer-based processes and automated
tracking and reporting, minimize time required in meetings and promote training as part
of the QA process. However, no one, especially talented technical types, likes
bureaucracy, and in the short run things may slow down a bit. A typical scenario would
be that more days of planning and development will be needed, but less time will be
required for late-night bug fixing and calming of irate customers.
Q31. What if the organization is growing so fast that fixed QA processes are impossible?
A: This is a common problem in the software industry, especially in new technology
areas. There is no easy solution in this situation, other than...
• Hire good people (i.e. hire Rob Davis)
• Ruthlessly prioritize quality issues and maintain focus on the customer;
• Everyone in the organization should be clear on what quality means to the
customer.
Q32. How is testing affected by object-oriented designs?

A: A well-engineered object-oriented design can make it easier to trace from code to
internal design to functional design to requirements. While there will be little effect on
black box testing (where an understanding of the internal design of the application is
unnecessary), white-box testing can be oriented to the application's objects. If the
application was well designed this can simplify test design.

Q33. Why do you recommend that we test during the design phase?
A: Because testing during the design phase can prevent defects later on. We recommend
verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all
relationships between modules, how to pass data, what happens in exceptional
circumstances, starting state of each module and how to guarantee the state of
each module).
3. Verify the design incorporates enough memory, I/O devices and quick enough
runtime for the final product.

Q36. Process and procedures - why follow them?


A: Detailed and well-written processes and procedures ensure the correct steps are being
executed to facilitate a successful completion of a task. They also ensure a process is
repeatable. Once Rob Davis has learned and reviewed customer's business processes and
procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates - what is supposed to be in a document?


A: All documents should be written to a certain standard and template. Standards and
templates maintain document uniformity. It also helps in learning where information is
located, making it easier for a user to find what they want. Lastly, with standards and
templates, information will not be accidentally omitted from a document. Once Rob
Davis has learned and reviewed your standards and templates, he will use them. He will
also recommend improvements and/or additions.
Q38. What are the different levels of testing?
A: Rob Davis has expertise in testing at all testing levels listed below. At each test level,
he documents the results. Each level of testing is either considered black or white box
testing.

What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system
to the output of the current system to verify the new system performs the operations
correctly.

What is comparison testing?
A: Comparison testing is testing that compares software weaknesses and strengths to
those of competitors' products.
Q61. What testing roles are standard on most testing projects?
A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System
Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test
Configuration Manager. Depending on the project, one person may wear more than one
hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build
Manager and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?


A: The Test/QA Team Lead coordinates the testing activity, communicates testing status
to management and manages the test team.
Q63. What is a Test Engineer?
A: Test Engineers are engineers who specialize in testing. We, test engineers, create test
cases, procedures, scripts and generate data. We execute test procedures and scripts,
analyze standards of measurements, evaluate results of system/integration/regression
testing. We also...
• Speed up the work of the development staff;
• Reduce your organization's risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design
flaws, before users get discouraged, before shareholders lose their cool and
before employees get bogged down;
• Help the work of your development staff, so the development team can devote its
time to build up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and
your customers;
• Save money by discovering defects 'early' in the design process, before failures
occur in production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws;
before bugs and design flaws damage the reputation of your company.
What is a Test Build Manager?
A: Test Build Managers deliver current software versions to the test environment, install
the application's software and apply software patches, to both the application and the
operating system, set-up, maintain and back up test environment hardware. Depending on
the project, one person may wear more than one hat. For instance, a Test Engineer may
also wear the hat of a Test Build Manager.
What is a System Administrator?
A: Test Build Managers, System Administrators, Database Administrators deliver current
software versions to the test environment, install the application's software and apply
software patches, to both the application and the operating system, set-up, maintain and
back up test environment hardware. Depending on the project, one person may wear
more than one hat. For instance, a Test Engineer may also wear the hat of a System
Administrator.
What is a Database Administrator?
A: Test Build Managers, System Administrators and Database Administrators deliver
current software versions to the test environment, install the application's software and
apply software patches, to both the application and the operating system, set-up, maintain
and back up test environment hardware. Depending on the project, one person may wear
more than one hat. For instance, a Test Engineer may also wear the hat of a Database
Administrator.
What is a Technical Analyst?
A: Technical Analysts perform test assessments and validate system/functional test
requirements. Depending on the project, one person may wear more than one hat. For
instance, Test Engineers may also wear the hat of a Technical Analyst.

What is a Test Configuration Manager?


A: Test Configuration Managers maintain test environments, scripts, software and test
data. Depending on the project, one person may wear more than one hat. For instance,
Test Engineers may also wear the hat of a Test Configuration Manager.
What is a test schedule?
A: The test schedule identifies all tasks required for a successful testing effort, a
schedule of all test activities, and the resource requirements.
What is a software testing methodology?
A: One software testing methodology is the use of a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. Rob Davis
believes that using this methodology is important in the development and in ongoing
maintenance of his customers' applications.
What is the general testing process?
A: The general testing process is the creation of a test strategy (which sometimes
includes the creation of test cases), creation of a test plan/design (which usually includes
test cases and test procedures) and the execution of tests.
How do you create a test strategy?
A: The test strategy is a formal description of how a software product will be tested. A
test strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team. The test
plan may include test cases, conditions, the test environment, a list of related tasks,
pass/fail criteria and risk assessment.
Inputs for this process:
• A description of the required hardware and software components, including test
tools. This information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test
and schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes
from requirements, change request, technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at
the project management level.
How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional requirements of the
release and preparing logical groups of functions that can be further broken into test
procedures. Test procedures define test conditions, data to be used for testing and
expected results, including database updates, file outputs, report results. Generally
speaking...
• Test cases and scenarios are designed to represent both typical and unusual
situations that may occur in the application.

• Test engineers define unit test requirements and unit test cases. Test engineers
also execute unit test cases.

• It is the test team that, with assistance of developers and clients, develops test
cases and scenarios for integration and system testing.

• Test scenarios are executed through the use of test procedures or scripts.

• Test procedures or scripts define a series of steps necessary to perform one or
more test scenarios.

• Test procedures or scripts include the specific data that will be used for testing the
process or transaction.

• Test procedures or scripts may cover multiple test scenarios.

• Test scripts are mapped back to the requirements and traceability matrices are
used to ensure each test is within scope.
• Test data is captured and base lined, prior to testing. This data serves as the
foundation for unit and system testing and used to exercise system functionality in
a controlled environment.

• Some output data is also base-lined for future comparison. Base-lined data is used
to support future application maintenance via regression testing.

• A pretest meeting is held to assess the readiness of the application and the
environment and data to be tested. A test readiness document is created to indicate
the status of the entrance criteria of the release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived
from general and detailed design documents, e.g. software design document,
source code and software complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.
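The traceability-matrix mapping mentioned above amounts to a coverage check: every requirement should be hit by at least one script, and every script should trace back to a requirement. A minimal sketch with illustrative IDs:

```python
# Illustrative test-script-to-requirement mapping (the traceability matrix).
matrix = {
    "TS-01": ["REQ-1"],
    "TS-02": ["REQ-2", "REQ-3"],
    "TS-03": [],            # script traces to nothing: possibly out of scope
}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

covered = {r for reqs in matrix.values() for r in reqs}
uncovered = requirements - covered                      # requirements with no test
untraced = [s for s, reqs in matrix.items() if not reqs]  # scripts with no requirement
print(sorted(uncovered), untraced)  # ['REQ-4'] ['TS-03']
```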

How do you execute tests?
A: Execution of tests is completed by following the test documents in a methodical
manner. As each test procedure is performed, an entry is recorded in a test execution log
to note the execution of the procedure and whether or not the test procedure uncovered
any defects. Checkpoint meetings are held throughout the execution phase. Checkpoint
meetings are held daily, if required, to address and discuss testing issues, status and
activities.
• The output from the execution of test procedures is known as test results. Test
results are evaluated by test engineers to determine whether the expected results
have been obtained. All discrepancies/anomalies are logged and discussed with
the software team lead, hardware test lead, programmers, software engineers and
documented for further investigation and resolution. Every company has a
different process for logging and reporting bugs/defects uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are
recorded in a test summary report. The severity of a problem, found during
system testing, is defined in accordance to the customer's risk assessment and
recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of
the problem. Fixes are regression tested and flawless fixes are migrated to a new
baseline. Following completion of the test, members of the test team prepare a
summary report. The summary report is reviewed by the Project Manager,
Software QA Manager and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the
Configuration Manager to coordinate the migration of the release software
components to the next test level, as documented in the Configuration
Management Plan. The software is only migrated to the production environment
after the Project Manager's formal acceptance.
• The test team reviews test document problems identified during testing, and
updates documents where appropriate.
Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software
Design Document.
• Software that has been migrated to the test environment, i.e. unit tested code,
via the Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
Outputs for this process:
• Log and summary of the test results. Usually this is part of the Test Report. This
needs to be approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are
Requirements document and Design Document problems.
• Reports on software design issues, given to software developers for correction.
Examples are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for
migration to the next level.
What testing approaches can you tell me about?
A: Each of the following represents a different testing approach:
• Black box testing,
• White box testing,
• Unit testing,
• Incremental testing,
• Integration testing,
• Functional testing,
• System testing,
• End-to-end testing,
• Sanity testing,
• Regression testing,
• Acceptance testing,
• Load testing,
• Performance testing,
• Usability testing,
• Install/uninstall testing,
• Recovery testing,
• Security testing,
• Compatibility testing,
• Exploratory testing, ad-hoc testing,
• User acceptance testing,
• Comparison testing,
• Alpha testing,
• Beta testing, and
• Mutation testing.
What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and hardware)
under extraordinary operating conditions. For example, when a web server is stress
tested, testing aims to find out how many users can be on-line, at the same time, without
crashing the server. Stress testing tests the stability of a given system or entity. It tests
something beyond its normal operational capacity, in order to observe any negative
results. For example, a web server is stress tested, using scripts, bots, and various denial
of service tools.
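The idea above can be sketched in a few lines. This is a minimal, self-contained illustration, not a real load tool: the "server" is a hypothetical in-process function with a fixed capacity of 10 concurrent requests, and the goal is to push past that capacity and observe the negative result (rejections).

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "server" that can serve at most 10 requests at once;
# anything beyond that capacity is rejected -- the negative result a
# stress test is designed to provoke.
CAPACITY = 10
_slots = threading.Semaphore(CAPACITY)

def handle_request():
    if not _slots.acquire(blocking=False):
        return "rejected"          # server is over capacity
    try:
        time.sleep(0.2)            # simulate work while holding a slot
        return "ok"
    finally:
        _slots.release()

def stress(concurrent_users):
    """Fire `concurrent_users` simultaneous requests and count rejections."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: handle_request(),
                                range(concurrent_users)))
    return results.count("rejected")
```

Driving `stress()` with an increasing user count until rejections appear answers the question "how many users can be on-line at the same time without crashing the server" for this toy model.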
What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used
synonymously with stress testing, performance testing, reliability testing, and volume
testing. Load testing generally stops short of stress testing. During stress testing, the load
is so great that errors are the expected results, though there is gray area in between stress
testing and load testing.
What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and quality
of software. Strictly speaking, testing cannot establish the correctness of software: it can
find defects, but cannot prove there are no defects.

What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that
examines the application's program structure, and derives test cases from the application's
program logic.
What is gamma testing?
A: Gamma testing is testing of software that has all the required features, but has not gone
through all the in-house quality checks. Cynics tend to refer to such software releases as
"gamma testing".
What is glass box testing?
A: Glass box testing is the same as white box testing. It is a testing approach that
examines the application's program structure, and derives test cases from the application's
program logic.
What is open box testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines
the application's program structure, and derives test cases from the application's program
logic.
What is closed box testing?
A: Closed box testing is the same as black box testing. Black box testing is a type of testing
that considers only externally visible behavior. It considers neither the code itself, nor the
"inner workings" of the software.

What do test case templates look like?


A: Software test cases are in a document that describes inputs, actions, or events, and
their expected results, in order to determine if all features of an application are working
correctly. Test case templates contain all particulars of every test case. Often these
templates are in the form of a table. One example of this table is a 6-column table, where
column 1 is the "Test Case ID Number", column 2 is the "Test Case Name", column 3 is
the "Test Objective", column 4 is the "Test Conditions/Setup", column 5 is the "Input
Data Requirements/Steps", and column 6 is the "Expected Results". All documents
should be written to a certain standard and template. Standards and templates maintain
document uniformity. They also help in learning where information is located, making it
easier for users to find what they want. Lastly, with standards and templates, information
will not be accidentally omitted from a document.
What is a software fault?

A: Software faults are hidden programming errors. Software faults are errors in the
correctness of the semantics of computer programs.

What is a software failure?

A: A software failure occurs when the software does not do what the user expects to see.
What is the difference between a software fault and a software failure?

A: A software failure occurs when the software does not do what the user expects to see.
A software fault, on the other hand, is a hidden programming error. A software fault
becomes a software failure only when the exact computation conditions are met and the
faulty portion of the code is executed on the CPU. This can occur during normal usage,
when the software is ported to a different hardware platform, when the software is ported
to a different compiler, or when the software gets extended.
What is a test engineer?

A: Test engineers are engineers who specialize in testing. We, test engineers, create test
cases, procedures, and scripts, and generate test data. We execute test procedures and
scripts, analyze standards of measurements, and evaluate results of
system/integration/regression testing.

What is the role of test engineers?

A: Test engineers speed up the work of the development staff, and reduce the risk of your
company's legal liability. We, test engineers, also give the company the evidence that the
software is correct and operates properly. We also improve problem tracking and
reporting, maximize the value of the software, and the value of the devices that use it. We
also assure the successful launch of the product by discovering bugs and design flaws
before users get discouraged, before shareholders lose their cool, and before employees
get bogged down. We, test engineers, help the work of the software development staff, so
the development team can devote its time to building up the product. We also promote
continual improvement, and provide documentation required by the FDA, FAA, other
regulatory agencies, and your customers. We save your company money by discovering
defects EARLY in the design process, before failures occur in production, or in the field.
We save the reputation of your company by discovering bugs and design flaws before
they damage that reputation.

What is a QA engineer?

A: QA engineers are test engineers, but QA engineers do more than just testing. Good
QA engineers understand the entire software development process and how it fits into the
business approach and the goals of the organization. Communication skills and the ability
to understand various sides of issues are important. We, QA engineers, are successful if
people listen to us, if people use our tests, if people think that we're useful, and if we're
happy doing our work. I would love to see QA departments staffed with experienced
software developers who coach development teams to write better code. But I've never
seen it. Instead of coaching, we, QA engineers, tend to be process people.

What is the role of the QA engineer?

A: The QA Engineer's function is to use the system much like real users would, find all
the bugs, find ways to replicate the bugs, submit bug reports to the developers, and to
provide feedback to the developers, i.e. tell them if they've achieved the desired level of
quality.

What are the responsibilities of a QA engineer?

A: Let's say, an engineer is hired for a small software company's QA role, and there is no
QA team. Should he take responsibility to set up a QA infrastructure/process, testing and
quality of the entire product? No, because taking this responsibility is a classic trap that
QA people get caught in. Why? Because we QA engineers cannot assure quality. And
because QA departments cannot create quality. What we CAN do is to detect lack of
quality, and prevent low-quality products from going out the door. What is the solution?
We need to drop the QA label, and tell the developers, they are responsible for the quality
of their own work. The problem is, sometimes, as soon as the developers learn that there
is a test department, they will slack off on their testing. We need to offer to help with
quality assessment only.

How do you perform integration testing?

A: First, unit testing has to be completed. Upon completion of unit testing, integration
testing begins. Integration testing is black box testing. The purpose of integration testing
is to ensure that distinct components of the application still work in accordance with
customer requirements. Test cases are developed with the express purpose of exercising
the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete when actual results and expected results are
either in line, or differences are explainable/acceptable based on client input.
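A minimal sketch of the idea of exercising the interface between components, using two hypothetical components (a tiny parser and a calculator); the test passes the output of one directly into the other, rather than testing either in isolation.

```python
def parse(expression):
    # Component 1: turns a string like "2+3" into (2, "+", 3).
    left, op, right = expression[0], expression[1], expression[2]
    return int(left), op, int(right)

def calculate(left, op, right):
    # Component 2: evaluates the parsed form.
    return left + right if op == "+" else left - right

def test_parser_feeds_calculator():
    # Integration: the output of parse() crosses the interface into calculate().
    assert calculate(*parse("2+3")) == 5
    assert calculate(*parse("9-4")) == 5

test_parser_feeds_calculator()
```

Each component could pass its own unit tests and the combination could still fail (e.g. if `parse` returned strings instead of ints); that interface mismatch is exactly what this style of test catches.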

What do test plan templates look like?

A: The test plan document template helps to generate test plan documents that describe
the objectives, scope, approach and focus of a software testing effort. Test document
templates are often in the form of documents that are divided into sections and
subsections. One example of this template is a 4-section document, where section 1 is the
description of the "Test Objective", section 2 is the description of the "Scope of Testing",
section 3 is the description of the "Test Approach", and section 4 is the "Focus of the
Testing Effort". All documents should be written to a certain standard and template.
Standards and templates maintain document uniformity. They also help in learning where
information is located, making it easier for a user to find what they want. With standards
and templates, information will not be accidentally omitted from a document. A software
project test plan is a document that describes the objectives, scope, approach and focus
of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the why and how
of product validation.

When do you choose automated testing?

A: For larger projects, or ongoing long-term projects, automated testing can be valuable.
But for small projects, the time needed to learn and implement the automated testing
tools is usually not worthwhile. Automated testing tools sometimes do not make testing
easier. One problem with automated testing tools is that if there are continual changes to
the product being tested, the recordings have to be changed so often that it becomes a
very time-consuming task to continuously update the scripts. Another problem with such
tools is the interpretation of the results (screens, data, logs, etc.), which can be a
time-consuming task.

What is the ratio of developers to testers?

A: This ratio is not a fixed one, but depends on what phase of the software development
life cycle the project is in. When a product is first conceived, organized, and developed,
this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp
contrast, when the product is near the end of the software development life cycle, this
ratio tends to be 1:1, or even 1:2, in favor of testers.

What is your role in your current organization?

A: I'm a Software QA Engineer. I use the system much like real users would. I find all
the bugs, find ways to replicate the bugs, submit bug reports to developers, and provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

What software tools are in demand these days?

A: The software tools currently in demand include LabVIEW, LoadRunner, Rational
Tools, and WinRunner -- especially LoadRunner and the Rational toolset -- but there
are many others, depending on the end client, their needs, and their preferences.

What other roles are in testing?

A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers,
System Administrators, Database Administrators, Technical Analysts, Test Build
Managers, and Test Configuration Managers. Depending on the project, one person can,
and often does, wear more than one hat. For instance, we Test Engineers often wear the
hats of Technical Analyst, Test Build Manager and Test Configuration Manager as well.
Which of these roles are the best and most popular?

A: As a yardstick of popularity, if we count the number of applicants and resumes, Tester
roles tend to be the most popular. Less popular are the roles of System Administrators,
Test/QA Team Leads, and Test/QA Managers. The "best" job is the job that makes YOU
happy. The best job is the one that works for YOU, using the skills, resources, and talents
YOU have. To find the best job, you need to experiment, and "play" different roles.
Persistence, combined with experimentation, will lead to success.

What's the difference between efficient and effective?

A: "Efficient" means having a high ratio of output to input; working or producing with a
minimum of waste. For example, "An efficient engine saves gas". "Effective", on the
other hand, means producing, or capable of producing, an intended result, or having a
striking effect. For example, "For rapid long-distance transportation, the jet engine is
more effective than a witch's broomstick".

What is documentation change management?

A: Documentation change management is part of configuration management (CM). CM
covers the tools and processes used to control, coordinate and track code, requirements,
documentation, problems, change requests, designs, tools, compilers, libraries, patches,
changes made to them, and who makes the changes.

What is up time?
A: Up time is the time period when a system is operational and in service. Up time is the
sum of busy time and idle time.

What is upwardly compatible software?

A: Upwardly compatible software is compatible with a later or more complex version of
itself. For example, an upwardly compatible software is able to handle files created by a
later version of itself.

What is upward compression?

A: In software design, upward compression means a form of demodularization, in which
a subordinate module is copied into the body of a superior module.
What is usability?

A: Usability means ease of use; the ease with which a user can learn to operate, prepare
inputs for, and interpret outputs of a software product.

What is user documentation?

A: User documentation is a document that describes the way a software product or
system should be used to obtain the desired results.

What is a user manual?

A: A user manual is a document that presents information necessary to employ software
or a system to obtain the desired results. Typically, what is described are system and
component capabilities, limitations, options, permitted inputs, expected outputs, error
messages, and special instructions.

What is the difference between user documentation and user manual?

A: When a distinction is made between those who operate and use a computer system for
its intended purpose, a separate user documentation and user manual is created. Operators
get user documentation, and users get user manuals.

What is user friendly software?

A: A computer program is user friendly, when it is designed with ease of use, as one of
the primary objectives of its design.

What is a user friendly document?

A: A document is user friendly, when it is designed with ease of use, as one of the
primary objectives of its design.

What is a user guide?

A: User guide is the same as the user manual. It is a document that presents information
necessary to employ a system or component to obtain the desired results. Typically, what
is described are system and component capabilities, limitations, options, permitted inputs,
expected outputs, error messages, and special instructions.
What is user interface?

A: User interface is the interface between a human user and a computer system. It
enables the passage of information between a human user and hardware or software
components of a computer system.

What is a utility?

A: Utility is a software tool designed to perform some frequently used support function.
For example, a program to print files.

What is utilization?

A: Utilization is the ratio of the time a system is busy, divided by the time it is available.
Utilization is a useful measure in evaluating computer performance.
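The definition above is a simple ratio; the figures below are a hypothetical example.

```python
# Utilization as defined above: time busy divided by time available.
def utilization(busy_time, available_time):
    return busy_time / available_time

# e.g. a system busy for 18 hours out of 24 available hours:
assert utilization(18, 24) == 0.75
```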

Q146. What is variable trace?

A: Variable trace is a record of the names and values of variables accessed and changed
during the execution of a computer program.

Q147. What is value trace?

A: Value trace is the same as variable trace. It is a record of the names and values of
variables accessed and changed during the execution of a computer program.

Q148. What is a variable?

A: Variables are data items whose values can change. For example: "capacitor_voltage".
There are local and global variables, and constants.

Q149. What is a variant?

A: Variants are versions of a program. Variants result from the application of software
diversity.

Q151. What is a software version?

A: A software version is an initial release (or re-release) of software, associated with a
complete compilation (or recompilation) of the software.

Q152. What is a document version?

A: A document version is an initial release (or a complete re-release) of a document, as
opposed to a revision resulting from issuing change pages to a previous release.

Q153. What is VDD?

A: VDD is an acronym. It stands for "version description document".

Q154. What is a version description document (VDD)?

A: A version description document (VDD) is a document that accompanies and identifies
a given version of a software product. Typically, the VDD includes a description and
identification of the software, identification of changes incorporated into this version, and
installation and operating information unique to this version of the software.
Q155. What is a vertical microinstruction?

A: A vertical microinstruction is a microinstruction that specifies one of a sequence of
operations needed to carry out a machine language instruction. Vertical microinstructions
are short, 12 to 24 bit instructions. They're called vertical because they are normally listed
vertically on a page. Several of these microinstructions are required to carry out a single
machine language instruction. Besides vertical microinstructions, there are also horizontal
and diagonal microinstructions.

Q156. What is a virtual address?

A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations.
They allow those locations to be accessed as though they were part of the main storage.

Q157. What is virtual memory?

A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's
program and data are placed in auxiliary storage, and the operating system automatically
swaps them in and out of main storage as needed.

Q158. What is virtual storage?

A: Virtual storage is a storage allocation technique, in which auxiliary storage can be
addressed as though it were part of main storage. Portions of a user's program and data
are placed in auxiliary storage, and the operating system automatically swaps them in and
out of main storage as needed.

Q160. What is the waterfall model?

A: Waterfall is a model of the software development process in which the concept phase,
requirements phase, design phase, implementation phase, test phase, installation phase,
and checkout phase are performed in that order, possibly with overlap, but with little or
no iteration.

Q161. What are the phases of the software development process?

A: The software development process consists of the concept phase, requirements phase,
design phase, implementation phase, test phase, installation phase, and checkout phase.

Q162. What models are used in software development?

A: In software development process the following models are used: waterfall model,
incremental development model, rapid prototyping model, and spiral model.

Q164. Can you give me more information on software QA/testing, from a tester's point
of view?

A: Yes, I can. You can visit my web site, and on pages www.robdavispe.com/free and
www.robdavispe.com/free2 you can find answers to many questions on software QA,
documentation, and software testing, from a tester's point of view. As to questions and
answers that are not on my web site now, please be patient, as I am going to add more
answers, as soon as time permits.

Q167. What types of testing can you tell me about?

A: Each of the following represents a different type of testing approach: black box
testing, white box testing, unit testing, incremental testing, integration testing, functional
testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance
testing, load testing, performance testing, usability testing, install/uninstall testing,
recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc
testing, user acceptance testing, comparison testing, alpha testing, beta testing, and
mutation testing.
Q169. How do you conduct peer reviews?

A: The peer review, sometimes called PDR, is a formal meeting, more formalized than a
walk-through, and typically consists of 3-10 people including a test lead, task lead (the
author of whatever is being reviewed), and a facilitator (to make notes). The subject of
the PDR is typically a code block, release, feature, or document, e.g. requirements
document or test plan. The purpose of the PDR is to find problems and see what is
missing, not to fix anything. The result of the meeting should be documented in a written
report. Attendees should prepare for this type of meeting by reading through documents,
before the meeting starts; most problems are found during this preparation. Preparation
for PDRs is difficult, but is one of the most cost-effective methods of ensuring quality,
since bug prevention is more cost effective than bug detection.

Q170. How do you check the security of your application?

A: To check the security of an application, we can use security/penetration testing.
Security/penetration testing is testing how well the system is protected against
unauthorized internal or external access, or willful damage. This type of testing usually
requires sophisticated testing techniques.
Q171. How do you test the password field?

A: To test the password field, we do boundary value testing.
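Boundary value testing of the password field can be sketched as follows. The 8-20 character policy is a hypothetical assumption for illustration; the test exercises the values just below, at, and just above each limit.

```python
# Hypothetical password-length policy: 8 to 20 characters inclusive.
MIN_LEN, MAX_LEN = 8, 20

def is_valid_length(password):
    return MIN_LEN <= len(password) <= MAX_LEN

# Boundary values: just below, at, and just above each limit.
assert is_valid_length("a" * 7) is False    # one below the minimum
assert is_valid_length("a" * 8) is True     # at the minimum
assert is_valid_length("a" * 20) is True    # at the maximum
assert is_valid_length("a" * 21) is False   # one above the maximum
```

Off-by-one errors cluster at exactly these limits, which is why boundary values are tested rather than arbitrary mid-range lengths.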

Q172. When testing the password field, what is your focus?

A: When testing the password field, one needs to verify that passwords are encrypted.

Q174. What is the objective of regression testing?

A: The objective of regression testing is to test that the fixes have not created any other
problems elsewhere. In other words, the objective is to ensure the software has remained
intact. A baseline set of data and scripts is maintained and executed, to verify that
changes introduced during the release have not "undone" any previous code. Expected
results from the baseline are compared to results of the software under test. All
discrepancies are highlighted and accounted for, before testing proceeds to the next level.
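The baseline comparison described above can be sketched like this; the test names and results are hypothetical.

```python
# Baseline of expected results from the previous, known-good release.
baseline = {"login": "ok", "search": "ok", "checkout": "ok"}

def run_regression(actual_results):
    """Compare actual results with the baseline; highlight discrepancies."""
    return {test: (baseline[test], actual_results.get(test))
            for test in baseline
            if actual_results.get(test) != baseline[test]}

# A fix to checkout has accidentally "undone" search:
discrepancies = run_regression({"login": "ok", "search": "error", "checkout": "ok"})
assert discrepancies == {"search": ("ok", "error")}
```

Every entry in `discrepancies` would have to be accounted for before testing proceeds to the next level.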

Q175. What types of white box testing can you tell me about?

A: White box testing is a testing approach that examines the application's program
structure, and derives test cases from the application's program logic. Clear box testing is
a white box type of testing. Glass box testing is also a white box type of testing. Open
box testing is also a white box type of testing.

Q176. What types of black box testing can you tell me about?

A: Black box testing is functional testing, not based on any knowledge of internal
software design or code. Black box testing is based on requirements and functionality.
Functional testing is a black box type of testing geared to the functional requirements of
an application. System testing is also a black box type of testing. Acceptance testing is
also a black box type of testing. Closed box testing is also a black box type of testing.
Integration testing is also a black box type of testing.

Q177. Is the regression testing performed manually?

A: It depends on the initial testing approach. If the initial testing approach is manual
testing, then, usually, the regression testing is performed manually. Conversely, if the
initial testing approach is automated testing, then, usually, the regression testing is
performed by automated testing.

Q181. What is your view of software QA/testing?

A: Software QA/testing is easy, if requirements are solid, clear, complete, detailed,
cohesive, attainable and testable, if schedules are realistic, and if there is good
communication. Software QA/testing is a piece of cake, if project schedules are realistic,
if adequate time is allowed for planning, design, testing, bug fixing, re-testing, changes,
and documentation. Software QA/testing is easy, if testing is started early on, if fixes or
changes are re-tested, and if sufficient time is planned for both testing and bug fixing.
Software QA/testing is easy, if new features are avoided, and if one is able to stick to the
initial requirements as much as possible.

Q188. When is a process repeatable?

A: If we use detailed and well-written processes and procedures, we ensure the correct
steps are being executed. This facilitates a successful completion of a task. This is a way
we also ensure a process is repeatable.

Q189. What does a Test Strategy Document contain?

A: The test strategy document is a formal description of how a software product will be
tested. A test strategy is developed for all levels of testing, as required. The test team
analyzes the requirements, writes the test strategy and reviews the plan with the project
team. The test plan may include test cases, conditions, the test environment, a list of
related tasks, pass/fail criteria, and risk assessment. Additional sections in the test
strategy document include:
• A description of the required hardware and software components, including test
tools. This information comes from the test environment, including test tool data.
• A description of the roles and responsibilities of the resources required for the test,
and schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes
from requirements, change request, technical, and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.

Q190. What is test methodology?

A: One test methodology is a three-step process: creating a test strategy, creating a test
plan/design, and executing tests. This methodology can be used and molded to your
organization's needs.

SQA activities - suggesting and reviewing the process documents. Example - reviewing
the project management plan, etc. SQA activities are to monitor the complete software
development PROCESS. This includes continuous improvements and enhancements of
the overall development PROCESS.
How can we perform testing without expected results?
The main concept in testing is the expected result. By knowing the expected behavior of
the application or system from the SRS and FDS, we can derive a test case. When the
derived test cases are executed, the actual result is noted. Any deviation from the expected
result is considered a defect.
In ad-hoc testing, there is no need for a test case, but if we want to log a defect, we should
know the expected behavior of the application or system.
There is only one possibility for this question, according to me: exploratory testing. This
is an interactive process of concurrent product exploration, test design, and test execution.
The heart of exploratory testing can be stated simply: the outcome of this test influences
the design of the next test. The tester will explore the product or application, note down
the expected result, design a test case, and execute the test.
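The core rule above, that any deviation from the expected result is a defect, can be sketched in a couple of lines; the login message is a hypothetical example of an expected behavior taken from a specification.

```python
# Verdict rule described above: any deviation of the actual result from
# the expected result is considered a defect.
def verdict(expected, actual):
    return "pass" if actual == expected else "defect"

# Expected behavior would come from the SRS/FDS (hypothetical example):
assert verdict(expected="Welcome, alice", actual="Welcome, alice") == "pass"
assert verdict(expected="Welcome, alice", actual="Internal error") == "defect"
```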
Q: How do you conduct boundary analysis testing for an "OK" push button?
Ans: For OK and Submit buttons we can follow the testing below:
1. Whether the proper URL of the window is opening or not
2. User interface testing
What is an exit and entry criteria in a Test Plan?
Generally, the test plan document is prepared by the Test Lead and QA Manager. Entry
and exit criteria are part of the test plan document.
Entry criteria:
1. Testing environment established
2. Test cases prepared
3. Build received from the development team
Exit criteria:
1. All modules are covered or not
2. All test cases are completely executed or not
3. All bugs resolved or not
This is part of the HOW TO TEST criteria of the test plan document.
According to me, the entry criteria is when the build comes to the testing department for
testing with a unit test checklist, and the exit criteria is when all planned test cases get
executed and no bugs remain in the system on the basis of these test cases.
Who writes the Business requirements? What do you do when you have the BRD?
The Business Analyst writes the Business Requirements. In a large commercial software
development environment, the main actors within the life cycle of a software project are
as follows: Project Stakeholder (Sponsor), Project Manager (manages the entire project),
QA Manager, Business Analyst, and Software Architect. The Business Analyst, after
finalizing the scope of the project with the Project Stakeholders (sponsors), writes all the
business requirements (BR) for the software application, and then sends them to the
Software Architect (SA) for validation. The SA, after understanding the BR, makes use
cases and UML diagrams and forwards them to the QA Manager. The QA Manager,
based on the scope, BR, use cases and UML diagrams, writes the overall testing strategy
and high level test cases, and forwards them to the testers. The Project Manager monitors
the complete project. The QA Manager, Business Analyst and Software Architect report
to the Project Manager, whereas the Project Manager reports to the Project Stakeholder.
What do we normally check for in Database Testing?
In DB Testing, basically all the events like Add, Delete, Update, Modify, etc.
I will tell you one example for Test Engineer, QA, QC. Take an examination center:
---> Test Engineer is the examiner
---> QA is the sitting squad
---> QC is the flying squad
Testing: What are the key elements for creating a test plan?
The key elements for a test plan are:
1. Entrance criteria: the requirement documents based on which the plan is developed,
e.g. the BRD and FRD
2. Test environment
3. Test data
How do you ensure the quality of the product?
The quality of the product can be ensured by keeping the bugs in a product within the
standard maintained by the organization for its clients. That means if a company is a six
sigma oriented company, then there should be at most about 3.4 defects per million
opportunities.
What is the job of a Quality Assurance engineer? What is the difference between the testing and
Quality Assurance jobs?
A Quality Assurance Engineer is one who understands the SDLC process. He/she has a 'test to
break' attitude, an ability to take the point of view of the customer, a strong desire for quality,
and an attention to detail. Communication skills and the ability to understand various sides of
issues are also important.
What is the role of QA in project development?
QA works in parallel with project development, establishing the testing framework as per the
requirements and documenting the results. This covers test planning, designing and writing the
test cases, implementing the metrics, and supporting delivery of the product as per the
deadlines.
How can you test the white page?
You can check whether the dimensions of the paper and print jobs match the requirements
document. Canvas sizing, resizing, window minimize and maximize functions, line scroll, and
page scroll are other areas to test.
What is the role of QA in a company that produces software?
QA is responsible for managing, implementing, maintaining, and continuously improving the
processes of the company, enabling internal projects to move toward process maturity, and
facilitating process improvements and innovations in the organization.
The tester is responsible for carrying out the testing effort in the company.
In many companies, the QA position covers both testing and creating and improving the
processes.
In general, how do you see automation fitting into the overall process of testing?
Automation is adopted when the process is repeatable in nature and manual execution is not
efficient or economical. We then perform the testing using automation tools that are either
already available or using scripts/software we develop ourselves for the testing process.
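For instance, a repeatable check that is tedious to run manually can be scripted once and re-run on every build. A minimal data-driven sketch (the `calculate_discount` function is hypothetical):

```python
# Hypothetical function under test: applies a percentage discount to a price.
def calculate_discount(price, percent):
    return round(price * (1 - percent / 100.0), 2)

# A repeatable, data-driven check: each (input, expected) pair is one test case.
cases = [
    ((100.0, 10), 90.0),
    ((100.0, 0), 100.0),
    ((59.99, 25), 44.99),
]

for (price, percent), expected in cases:
    actual = calculate_discount(price, percent)
    assert actual == expected, f"{price} at {percent}%: got {actual}, want {expected}"
print("all automated cases passed")
```

Adding a new regression check is then just one more row in `cases`, which is the economy that makes automation worthwhile for repeatable scenarios.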
4. What is the outcome of testing?
A. The outcome of testing will be a stable application which meets the customer's
requirements.
5. What kind of testing have you done?
A. Usability, Functionality, System testing, regression testing, UAT
(it depends on the person).
6. What is the need for testing?
A. The primary need is to verify that the requirements are satisfied by the
functionality, and also to answer two questions:
1. Is the system doing what it is supposed to do?
2. Is the system not doing what it is not supposed to do?
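The two questions correspond to positive and negative testing. A tiny sketch, with a hypothetical `divide` function:

```python
# Hypothetical function under test.
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# 1. Positive test: the system does what it is supposed to do.
assert divide(10, 2) == 5.0

# 2. Negative test: the system does NOT do what it is not supposed to do
#    (here: it must refuse to divide by zero rather than return garbage).
try:
    divide(1, 0)
    assert False, "expected ValueError"
except ValueError:
    pass
```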
7. What are the entry criteria for functionality and performance testing?
A. The entry criteria for functionality testing are the Functional Specification/BRS
(CRS)/User Manual, plus an integrated application that is stable for testing.
The entry criterion for performance testing is successful completion of functional
testing: all functional requirements have been covered, tested, and approved or
validated.
9. Why do you go for white box testing when black box testing is available?
A. The objective of black box testing is a benchmark that certifies the commercial
(business) aspects as well as the functional (technical) aspects. Loops, structures,
arrays, conditions, files, etc. are very micro level, but they are the foundation of
any application, so white box testing examines and tests these things directly.
Even though black box testing is available, we should also go for white box testing
to check the correctness of the code and for integrating the modules.
36. What are the different types of testing techniques?
A. 1. White box testing 2. Black box testing.
37. What are the different types of test case techniques?
A. 1. Equivalence Partitioning. 2. Boundary Value Analysis. 3. Error Guessing.
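As an illustration of the first two techniques, suppose a field accepts ages 18 to 60 inclusive (a hypothetical requirement). Boundary value analysis clusters test cases at the edges of the valid range, and equivalence partitioning picks one representative per partition:

```python
# Hypothetical validator: accepts ages in the inclusive range 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Boundary value analysis: values at, just inside, and just outside each boundary.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age}"

# Equivalence partitioning: one representative from each partition is enough.
assert is_valid_age(40) is True       # valid partition
assert is_valid_age(5) is False       # below-range partition
assert is_valid_age(99) is False      # above-range partition
```

Error guessing would add cases based on experience, such as negative numbers or non-numeric input, depending on the field's specification.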
38.What are the risks involved in testing?
48. What is the difference between unit testing and integration testing?
A. Unit testing: a testing activity typically done by the developers, not by testers, as it
requires detailed knowledge of the internal program design and code. It is not always
easily done unless the application has a well-designed architecture with tight code.
Integration testing: testing of combined parts of an application to determine whether
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing is
especially relevant to client/server and distributed systems.
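A toy illustration of the distinction, with two hypothetical functions: a unit test exercises one function in isolation, while an integration test checks that the combined parts work together:

```python
import unittest

# Two hypothetical 'parts' of an application.
def parse_amount(text):
    """Parse a string like '12.50' into an integer number of cents."""
    return int(round(float(text) * 100))

def format_cents(cents):
    """Format a number of cents back into a display string."""
    return f"{cents / 100:.2f}"

class UnitTests(unittest.TestCase):
    # Unit test: one function, in isolation.
    def test_parse_amount(self):
        self.assertEqual(parse_amount("12.50"), 1250)

class IntegrationTests(unittest.TestCase):
    # Integration test: the two parts combined must round-trip correctly.
    def test_parse_then_format(self):
        self.assertEqual(format_cents(parse_amount("12.50")), "12.50")
```

Saved in a file, the classes can be run with `python -m unittest <file>`.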
57. What is the difference between SIT and IST?


1. How do you write test cases?
2. What are the major bugs?
3. How do you report?
4. What is the objective of the application?
and many more....

QA interview - testing toaster


How would you test a toaster, vending machine, electric kettle or ATM? Some QA
managers consider this most common interview question the perfect QA interview
question. At the same time, it is the easiest one: the QA engineer needs to say just two
magic words, "requirements" and "specifications", and then approach the toaster tests as
any other usual application under test, with different testing types like functionality
testing, usability, white box testing, black box testing, performance testing, and so on.

How would you test a toaster?


Interview puzzles and riddles are a long-standing tradition in software development
companies like Google, Facebook and Microsoft. Personally, I would not make a hiring
decision for a software test engineer position based on a puzzle interview question like
"How would you test a toaster, vending machine, electric kettle, pencil or other
appliance?", but some hiring QA managers still think that they can separate the good
SQA Engineers from the bad using interview riddles.

Let's say that, as part of the interview process for a Test Engineer position, you
successfully answered the interview question about creating a test automation framework
based on Selenium, explained the difference between white and black box testing, recited
the agile manifesto principles with genuine expression in your voice, and even solved the
interview puzzle about one hundred prisoners; then, just when you feel that you are
almost hired as a tester, the interviewer asks you how you would test a toaster.

One more QA manager interview question can help analyze the management
abilities of a candidate for a QA manager position. This interview question is especially
critical for QA teams working in an Agile environment.
In certain domains, there is some amount of testing that cannot reasonably be automated
in the time available. This testing requires that a QA Engineer's eyeballs carefully look at the
screen and work through the application under test. It isn't necessary for these QA
Engineers to be developers; in fact, it might be better if they aren't programmers, since
developers view the world differently than most people. You may want non-programmer
QA Engineers for the following manual tests:

* UI testing
* Usability testing
* Internationalization testing

At the same time, a QA Manager definitely wants testers to have coding skills and
be developers when doing the following kinds of tests:

* White box testing
* API testing
* Performance testing
* Database testing

How would you deal with a smarty-pants SQA Engineer


This is a perfect QA manager interview question. The software quality assurance job
market took a dramatic hit early this year, though some signs of stabilization are
appearing. A few recruiters believe layoffs have slowed. At the same time, companies are
not ready to begin hiring full-time testers, but many companies have started to hire
contract workers to handle quality assurance projects that had been suspended last year.
Hiring a tester as a contractor is always a big hurdle for a QA manager, and a proper
answer on dealing with a smarty-pants SQA Engineer would demonstrate management
abilities.

There is a carrot-and-stick approach to resolving the work performance of a tester who,
instead of doing the requested work, tries to dig into the code and fix developers' bugs in
the application code. The carrot approach requires convincing the QA Engineer to do
something meaningful about software application quality, like discovering the weak
points in the application logic, setting up a testing environment for hard-to-test features,
deciding when the testing should be completed, and finally signing off the product for
production. The QA Manager could also use the stick approach by asking the QA
Engineer serious questions about the functionality of the new features, testing coverage,
the number of high-severity bugs logged, the number of successfully executed test cases,
and when the application under test will be ready for production.

What are all the major processes involved in testing


Could you test a program 100%? 90%?
How would you test a mug
What is the other name for white box testing
What is other name for water fall model
What is considered a good test
Why is it often hard for management to get serious about quality assurance
How can new Software QA processes be introduced in an existing organization
What steps are needed to develop and run software tests

What if the application has functionality that wasn't in the requirements


How can Software QA processes be implemented without stifling productivity

What if an organization is growing so fast that fixed QA processes are impossible


How does a client/server environment affect testing
What is the testing process?

Manual and automation experience

How many test cases have you written in your module

For example, out of 100 test cases, if I asked you to automate them, how many could you
automate

What features have you frequently used in WR

Describe your role in WR

Name a few TSL functions in WR

Do you have any idea about back-end testing


Can you differentiate between data flow and control flow

Can you perform database testing? Tell me some queries

Tell me about parameterizing data

What is CM Plan

Explain Test Case Template

How will you prepare the Test Plan

What is GUI Testing and Functionality Testing

Write a query to fetch data from tables such as the Employee table and the Dept table
Can you test a DB using WR
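The join-query question above can be sketched like this, using a hypothetical Employee/Dept schema in SQLite (column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema for the Employee and Dept tables.
conn.executescript("""
    CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, emp_name TEXT,
                           dept_id INTEGER REFERENCES dept(dept_id));
    INSERT INTO dept VALUES (10, 'Sales'), (20, 'QA');
    INSERT INTO employee VALUES (1, 'Alice', 10), (2, 'Bob', 20);
""")

# The join query: fetch each employee together with their department name.
rows = conn.execute("""
    SELECT e.emp_name, d.dept_name
    FROM employee e
    JOIN dept d ON e.dept_id = d.dept_id
    ORDER BY e.emp_id
""").fetchall()

assert rows == [("Alice", "Sales"), ("Bob", "QA")]
conn.close()
```

The SELECT with a JOIN on `dept_id` is the core of the answer; the surrounding Python just sets up data so the query can be demonstrated end to end.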

Once a new build comes to the Quality Team, what will they do

What kinds of testing do you know? What is system testing? What is integration
testing? What is unit testing? What is regression testing?
Your theoretical background and homework may shine in this question. System testing is
a testing of ...

What are all the major processes involved in testing


The major processes include:
1. Planning (test strategy, test objectives, risk management) ...

How would you conduct your test


Each test is based on the technical requirements of the software product.

How is testing affected by object-oriented designs


Well-engineered object-oriented design can make it easier to trace from code to internal
design to f...

What are all the basic strategies for dealing with new code
- Start with obvious and simple tests
- Test each function sympathetically
- Test broadly befo...

What are all the main actions taken by the project manager when testing a product
1) Assess risks for the project as a whole
2) Assess the risk associated with the testing sub-p...

What are all the important factors to be traded off when building a product
1. Time to market
2. Cost to market
3. Reliability of delivered product
4. Feature se...

What are the likely risks that will arise during project planning
Are there fixed dates that must be met for milestones or components of the product?
How likel...