
Q1: What testing approaches can you tell me about?

A: Each of the following represents a different testing approach: black box testing, white box
testing, unit testing, incremental testing, integration testing, functional testing, system testing,
end-to-end testing, sanity testing, regression testing, acceptance testing, load testing,
performance testing, usability testing, install/uninstall testing, recovery testing, security testing,
compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison
testing, alpha testing, beta testing, and mutation testing.
Q2: What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and hardware) under
extraordinary operating conditions.

For example, when a web server is stress tested, testing aims to find out how many users can be
on-line, at the same time, without crashing the server. Stress testing tests the stability of a given
system or entity.

Stress testing tests something beyond its normal operational capacity, in order to observe any
negative results. For example, a web server is stress tested, using scripts, bots, and various
denial of service tools.
Q3: What is load testing?
A: Load testing simulates the expected usage of a software program, by simulating multiple users
that access the program's services concurrently. Load testing is most useful and most relevant for
multi-user systems, client/server models, including web servers.

For example, the load placed on the system is increased above normal usage patterns, in order
to test the system's response at peak loads.
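As a sketch of the idea, the following Python snippet simulates concurrent users with a thread pool. Here `handle_request` is a placeholder standing in for a real service call (e.g. an HTTP request), not an actual server; the names and numbers are illustrative only.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Placeholder for a real service call; simulates a small amount of work."""
    time.sleep(0.01)
    return 200  # pretend the service answered with HTTP 200

def run_load_test(num_users):
    """Simulate num_users concurrent users and report the outcome."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        statuses = list(pool.map(handle_request, range(num_users)))
    elapsed = time.time() - start
    errors = sum(1 for s in statuses if s != 200)
    return {"users": num_users, "errors": errors, "seconds": elapsed}

if __name__ == "__main__":
    # Ramp the load above normal usage, as the answer describes.
    for load in (10, 50, 100):
        print(run_load_test(load))
```

A real load test would replace `handle_request` with actual requests against the deployed system, and would measure response times as well as error counts.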
Q4: What is the difference between stress testing and load
testing?
A: Load testing generally stops short of stress testing.

During stress testing, the load is so great that the expected results are errors, though there is
gray area in between stress testing and load testing.

Load testing is a blanket term that is used in many different ways across the professional
software testing community.

The term, load testing, is often used synonymously with stress testing, performance testing,
reliability testing, and volume testing.
Q5: What is the difference between performance testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is gray area in between stress testing and load testing.
Q6: What is the difference between reliability testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is gray area in between stress testing and load testing.
Q7: What is automated testing?
A: Automated testing is a formally specified and controlled testing approach, in which tests are
executed by software tools rather than manually.

Q8: What is the difference between volume testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally stops
short of stress testing. During stress testing, the load is so great that errors are the expected
results, though there is gray area in between stress testing and load testing.
Q9: What is incremental testing?
A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing
is to provide an early feedback to software developers.
Q10: What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and quality of
software. Actually, testing cannot establish the correctness of software. It can find defects, but
cannot prove there are no defects.
Q11: What is alpha testing?
A: Alpha testing is final testing before the software is released to the general public. First, (and
this is called the first phase of alpha testing), the software is tested by in-house developers. They
use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs quickly.

Then, (and this is called second stage of alpha testing), the software is handed over to software
QA staff for additional testing in an environment that is similar to the intended use.
Q12: What is beta testing?
A: Following alpha testing, "beta versions" of the software are released to a group of people, and
limited public tests are performed, so that further testing can ensure the product has few bugs.

Other times, beta versions are made available to the general public, in order to receive as much
feedback as possible. The goal is to benefit the maximum number of future users.
Q13: What is the difference between alpha and beta testing?
A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is
performed by the public, a few select prospective customers, or the general public.
Q14: What is gamma testing?
A: Gamma testing is testing of software that has all the required features, but has not gone
through all the in-house quality checks. Cynics tend to refer to such premature software releases
as "gamma testing".
Q15: What is boundary value analysis?
A: Boundary value analysis is a technique for test data selection. A test engineer chooses values
that lie along data extremes. Boundary values include maximum, minimum, just inside
boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a
system works correctly for these extreme or special values, then it will work correctly for all
values in between. An effective way to test code is to exercise it at its natural boundaries.
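For example, a hypothetical validator that accepts values from 0 through 100 would be exercised at exactly these points:

```python
def is_valid_percentage(value):
    """Function under test (illustrative): accepts integers in the range 0..100."""
    return 0 <= value <= 100

# Boundary values: minimum, maximum, just inside, just outside,
# and a typical value, as described above.
boundary_cases = [
    (-1, False),   # just outside the lower boundary
    (0, True),     # the minimum itself
    (1, True),     # just inside the lower boundary
    (50, True),    # a typical value
    (99, True),    # just inside the upper boundary
    (100, True),   # the maximum itself
    (101, False),  # just outside the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_percentage(value) == expected, value
print("all boundary cases passed")
```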
Q16: What is ad hoc testing?
A: Ad hoc testing is the least formal testing approach.
Q17: What is clear box testing?
A: Clear box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q18: What is glass box testing?
A: Glass box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q19: What is open box testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines the
application's program structure, and derives test cases from the application's program logic.
Q20: What is black box testing?
A: Black box testing is a type of testing that considers only externally visible behavior. Black box
testing considers neither the code itself, nor the "inner workings" of the software.
Q21: What is functional testing?
A: Functional testing is the same as black box testing. Black box testing is a type of testing that
considers only externally visible behavior. Black box testing considers neither the code itself, nor
the "inner workings" of the software.
Q22: What is closed box testing?
A: Closed box testing is the same as black box testing. Black box testing is a type of testing that
considers only externally visible behavior. Black box testing considers neither the code itself, nor
the "inner workings" of the software.
Q23: What is bottom-up testing?
A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test
drivers for components that have not yet been developed, because, with bottom-up testing, low-
level components are tested first. The objective of bottom-up testing is to call low-level
components, for testing purposes.
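A minimal sketch of the idea: a low-level component (the hypothetical `parse_record` below) is tested through a driver before any higher-level caller exists.

```python
# Low-level component, developed and tested first.
def parse_record(line):
    """Parse a 'name,quantity' text line into a (name, int) pair."""
    name, qty = line.split(",")
    return name.strip(), int(qty)

# Test driver: a stand-in for the not-yet-written higher-level
# component, whose only job is to call the low-level code.
def test_driver():
    assert parse_record("widget, 3") == ("widget", 3)
    assert parse_record("gadget,10") == ("gadget", 10)
    print("low-level component passed")

test_driver()
```

Once the real higher-level component is written, it replaces the driver, and integration testing of the two begins.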
Q24: What is software quality?
A: The quality of the software does vary widely from system to system. Some common quality
attributes are stability, usability, reliability, portability, and maintainability. See quality standard ISO
9126 for more information on this subject.

Q25: What is a software fault?

A: A software fault is a hidden programming error. A software fault is an error in the correctness
of the semantics of a computer program.

Q26: What is software failure?


A: Software failure occurs when the software does not do what the user expects to see.

Q27: What is the difference between a software fault and software failure?
A: A software failure occurs when the software does not do what the user expects to see.
Software faults, on the other hand, are hidden programming errors. Software faults become
software failures only when the exact computation conditions are met, and the faulty portion of
the code is executed on the CPU. This can occur during normal usage. Other times it occurs
when the software is ported to a different hardware platform, or, when the software is ported to a
different compiler, or, when the software gets extended.

Q28: What is a test engineer?


A: We, test engineers, are engineers who specialize in testing. We create test cases, procedures,
scripts and generate data. We execute test procedures and scripts, analyze standards of
measurements, evaluate results of system/integration/regression testing.

Q29: What is a QA engineer?


A: QA engineers are test engineers, but they do more than just testing. Good QA engineers
understand the entire software development process and how it fits into the business approach
and the goals of the organization.

Communication skills and the ability to understand various sides of issues are important. A QA
engineer is successful if people listen to him, if people use his tests, if people think that he's
useful, and if he's happy doing his work.

I would love to see QA departments staffed with experienced software developers who coach
development teams to write better code. But I've never seen it. Instead of coaching, QA engineers
tend to be process people.

Q30: What do test case templates look like?


A: Software test cases are documents that describe inputs, actions, or events and their expected
results, in order to determine if all features of an application are working correctly.

A software test case template is, for example, a 6-column table, where column 1 is the "Test case
ID number", column 2 is the "Test case name", column 3 is the "Test objective", column 4 is the
"Test conditions/setup", column 5 is the "Input data requirements/steps", and column 6 is the
"Expected results".
All documents should be written to a certain standard and template. Standards and templates
maintain document uniformity. It also helps in learning where information is located, making it
easier for a user to find what they want. Lastly, with standards and templates, information will not
be accidentally omitted from a document.
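The 6-column template above can be represented as a simple data structure. The field names and sample values below are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of the 6-column test case template described above."""
    case_id: str
    name: str
    objective: str
    conditions_setup: str
    input_steps: list
    expected_results: str

# A hypothetical example entry.
tc = TestCase(
    case_id="TC-001",
    name="Login with valid credentials",
    objective="Verify that a registered user can log in",
    conditions_setup="User 'alice' exists with a known password",
    input_steps=["Open login page", "Enter credentials", "Press Login"],
    expected_results="User is redirected to the dashboard",
)
print(tc.case_id, "-", tc.name)
```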

Q31: What is the role of the test engineer?


A: We, test engineers, speed up the work of the development staff, and reduce the risk of your
company's legal liability.

We also give your company the evidence that the software is correct and operates properly.

We, test engineers, improve problem tracking and reporting, maximize the value of the software,
and the value of the devices that use it.

We, test engineers, assure the successful launch of the product by discovering bugs and design
flaws, before users get discouraged, before shareholders lose their cool and before employees
get bogged down.

We, test engineers, help the work of the software development staff, so the development team
can devote its time to build up the product.

We, test engineers, promote continual improvement.

We provide documentation required by FDA, FAA, other regulatory agencies, and your
customers.

We, test engineers, save your company money by discovering defects EARLY in the design
process, before failures occur in production, or in the field. We save the reputation of your
company by discovering bugs and design flaws, before bugs and design flaws damage the
reputation of your company.

Q32: What are the QA engineer's responsibilities?


A: Let's say, an engineer is hired for a small software company's QA role, and there is no QA
team. Should he take responsibility to set up a QA infrastructure/process, testing and quality of
the entire product? No, because taking this responsibility is a classic trap that QA people get
caught in. Why? Because we QA engineers cannot assure quality. And because QA departments
cannot create quality.

What we CAN do is to detect lack of quality, and prevent low-quality products from going out the
door. What is the solution? We need to drop the QA label, and tell the developers that they are
responsible for the quality of their own work. The problem is, sometimes, as soon as the
developers learn that there is a test department, they will slack off on their testing. We need to
offer to help with quality assessment, only.

Q33: What metrics can be used for software development?


A: Metrics refer to statistical process control. The idea of statistical process control is a great one,
but it has only a limited use in software development.

On the negative side, statistical process control works only with processes that are sufficiently
well defined AND unvaried, so that they can be analyzed in terms of statistics. The problem is,
most software development projects are NOT sufficiently well defined and NOT sufficiently
unvaried.

On the positive side, one CAN use statistics. Statistics are excellent tools that project managers
can use. Statistics can be used, for example, to determine when to stop testing, i.e. test cases
completed with certain percentage passed, or when bug rate falls below a certain level. But, if
these are project management tools, why should we label them quality assurance tools?

Q34: What is role of the QA engineer?


A: The QA Engineer's function is to use the system much like real users would, find all the bugs,
find ways to replicate the bugs, submit bug reports to the developers, and to provide feedback to
the developers, i.e. tell them if they've achieved the desired level of quality.

Q35: What metrics can be used for bug tracking?


A: Metrics that can be used for bug tracking include the total number of bugs, total number of
bugs that have been fixed, number of new bugs per week, and number of fixes per week.
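As a sketch, the bug-tracking metrics named above can be computed from a hypothetical list of bug records (the record layout here is illustrative, not a tracker's real schema):

```python
from collections import Counter

# Hypothetical bug records: (bug id, week opened, week fixed or None).
bugs = [
    (1, 1, 2), (2, 1, None), (3, 2, 2),
    (4, 2, 3), (5, 3, None),
]

total = len(bugs)
fixed = sum(1 for _, _, fixed_week in bugs if fixed_week is not None)
new_per_week = Counter(opened for _, opened, _ in bugs)
fixes_per_week = Counter(f for _, _, f in bugs if f is not None)

print("total bugs:", total)                      # 5
print("fixed:", fixed)                           # 3
print("new per week:", dict(new_per_week))       # {1: 2, 2: 2, 3: 1}
print("fixes per week:", dict(fixes_per_week))   # {2: 2, 3: 1}
```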

Other metrics in quality assurance include...

McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module
design complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity
metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object
integration complexity metric (OS1), global data complexity metric (gdv(G)), data complexity
metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data
reference metric (TDR), maintenance severity metric (maint_severity), data reference severity
metric (DR_severity), data complexity severity metric (DV_severity), global data severity metric
(gdv_severity)

Q36: What metrics can be used for bug tracking? (Cont'd...)


McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB),
access to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL),
number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G)
(MAXEV), and hierarchy quality (QUAL).

Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM),
number of children (NOC), response for a class (RFC), weighted methods per class (WMC),
Halstead software metrics program length, program volume, program level and program difficulty,
intelligent content, programming effort, error estimate, and programming time.

Line count software metrics: lines of code, lines of comment, lines of mixed code and
comments, and lines left blank.
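To illustrate the first of the McCabe metrics, cyclomatic complexity v(G) can be computed from a program's control-flow graph as E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe cyclomatic complexity v(G) = E - N + 2P for a
    control-flow graph with E edges, N nodes, P connected components."""
    return edges - nodes + 2 * components

# Straight-line code, entry -> A -> B -> exit: 4 nodes, 3 edges.
print(cyclomatic_complexity(edges=3, nodes=4))  # 1

# One if/else, entry -> decision -> (A | B) -> exit: 5 nodes, 5 edges.
# One decision point, so v(G) = 5 - 5 + 2 = 2.
print(cyclomatic_complexity(edges=5, nodes=5))  # 2
```

Equivalently, for a single-component graph, v(G) equals the number of decision points plus one.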

Q37: How do you perform integration testing?


A: First, unit testing has to be completed. Upon completion of unit testing, integration testing
begins. Integration testing is black box testing. The purpose of integration testing is to ensure
distinct components of the application still work in accordance with customer requirements.
Test cases are developed with the express purpose of exercising the interfaces between the
components. This activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in
line or differences are explainable/acceptable based on client input.
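A minimal illustration in Python: two hypothetical components are exercised through their shared interface, and the test checks the combined behavior rather than either component's internals.

```python
# Two separately unit-tested components (illustrative examples).
def normalize(text):
    """Component A: clean up raw input."""
    return text.strip().lower()

def lookup_price(catalog, name):
    """Component B: find a price in the catalog, or None."""
    return catalog.get(name)

def quote(catalog, raw_name):
    """The interface under test: A's output feeds B's input."""
    return lookup_price(catalog, normalize(raw_name))

# Integration test cases: exercise the interface, not the internals.
catalog = {"widget": 9.99}
assert quote(catalog, "  Widget ") == 9.99   # components agree on casing
assert quote(catalog, "unknown") is None     # missing items handled
print("integration test passed")
```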

Q38: What metrics are used for test report generation?


A: Metrics that can be used for test report generation include...

McCabe metrics: Cyclomatic complexity metric (v(G)), Actual complexity metric (AC), Module
design complexity metric (iv(G)), Essential complexity metric (ev(G)), Pathological complexity
metric (pv(G)), design complexity metric (S0), Integration complexity metric (S1), Object
integration complexity metric (OS1), Global data complexity metric (gdv(G)), Data complexity
metric (DV), Tested data complexity metric (TDV), Data reference metric (DR), Tested data
reference metric (TDR), Maintenance severity metric (maint_severity), Data reference severity
metric (DR_severity), Data complexity severity metric (DV_severity), Global data severity metric
(gdv_severity).

McCabe object oriented software metrics: Encapsulation percent public data (PCTPUB), and
Access to public data (PUBDATA), Polymorphism percent of unoverloaded calls (PCTCALL),
Number of roots (ROOTCNT), Fan-in (FANIN), quality maximum v(G) (MAXV), Maximum ev(G)
(MAXEV), and Hierarchy quality(QUAL).

Other object oriented software metrics: Depth (DEPTH), Lack of cohesion of methods
(LOCM), Number of children (NOC), Response for a class (RFC), Weighted methods per class
(WMC), Halstead software metrics program length, Program volume, Program level and program
difficulty, Intelligent content, Programming effort, Error estimate, and Programming time.

Line count software metrics: Lines of code, Lines of comment, Lines of mixed code and
comments, and Lines left blank.
Q39: What is the "bug life cycle"?
A: Bug life cycles are similar to software development life cycles. At any time during the software
development life cycle errors can be made during the gathering of requirements, requirements
analysis, functional design, internal design, documentation planning, document preparation,
coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and
phase-out.

Bug life cycle begins when a programmer, software developer, or architect makes a mistake,
creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed, and the bug
is no longer in existence.

What should be done after a bug is found? When a bug is found, it needs to be communicated
and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested.

Additionally, determinations should be made regarding requirements, software, hardware, safety
impact, etc., for regression testing to check the fixes didn't create other problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these determinations.

A variety of commercial, problem-tracking/management software tools are available. These tools,
with the detailed input of software test engineers, will give the team complete information so
developers can understand the bug, get an idea of its severity, reproduce it and fix it.
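The life cycle described here can be sketched as a small state machine. The state names below are illustrative; real problem-tracking systems define their own states and transitions.

```python
# Allowed transitions in a simple bug life cycle (illustrative names).
TRANSITIONS = {
    "new": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"retested"},
    "retested": {"closed", "assigned"},  # re-open if the fix failed
    "closed": set(),
}

def advance(state, next_state):
    """Move a bug to next_state, rejecting illegal jumps."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {next_state}")
    return next_state

state = "new"
for step in ("assigned", "fixed", "retested", "closed"):
    state = advance(state, step)
print("final state:", state)  # final state: closed
```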
Q40: What is integration testing?
A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct
components of the application still work in accordance with customer requirements. Test cases are
developed with the express purpose of exercising the interfaces between the components. This
activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in
line or differences are explainable / acceptable, based on client input.

Q41: What do test plan templates look like?


A: The test plan document template describes the objectives, scope, approach and focus of a
software testing effort.

Test document templates are often in the form of documents that are divided into sections and
subsections. One example of this template is a 4-section document, where section 1 is the "Test
Objective", section 2 is the "Scope of Testing", section 3 is the "Test Approach", and section 4 is
the "Focus of the Testing Effort".

All documents should be written to a certain standard and template. Standards and templates
maintain document uniformity. It also helps in learning where information is located, making it
easier for a user to find what they want. With standards and templates, information will not be
accidentally omitted from a document.

Q42: What is a software project test plan?


A: A software project test plan is a document that describes the objectives, scope, approach and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product.

The completed document will help people outside the test group understand the why and how of
product validation. It should be thorough enough to be useful, but not so thorough that no one
outside the test group will be able to read it.

Q43: When do you choose automated testing?


A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for
small projects, the time needed to learn and implement the automated testing tools is usually not
worthwhile.

Automated testing tools sometimes do not make testing easier. One problem with automated
testing tools is that if there are continual changes to the product being tested, the recordings have
to be changed so often, that it becomes a very time-consuming task to continuously update the
scripts.

Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that
can be a time-consuming task.

Q44: What's the ratio between developers and testers?


A: This ratio is not a fixed one, but depends on what phase of the software development life cycle
the project is in. When a product is first conceived, organized, and developed, this ratio tends to
be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the software is
near the end of alpha testing, this ratio tends to be 1:1 or 1:2, in favor of testers.

Q45: What is your role in your current organization?


A: I'm a QA Engineer. The QA Engineer's function is to use the system much like real users
would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and
to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q46: How can I learn to use WinRunner, without any outside help?
A: I suggest you read all you can, and that includes reading product description pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your hands
on. Then the next step is actual practice, the gathering of hands-on experience on how to use
WinRunner.

If there is a will, there is a way. You CAN do it, if you put your mind to it. You CAN learn to use
WinRunner, with little or no outside help.

Q47: Should I take a course in manual testing?


A: Yes, you should consider taking a course in manual testing. Why? Because learning how to
perform manual testing is an important part of one's education. Unless you have a significant
personal reason for not taking a course, you do not want to skip an important part of an academic
program.

Q48: To learn to use WinRunner, should I sign up for a course at a nearby educational
institution?
A: Free, or inexpensive, education is often provided on the job, by an employer, while one is
getting paid to do a job that requires the use of WinRunner and many other software testing tools.

In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes.
Classes, especially non-degree courses in community colleges, tend to be inexpensive.

Q49: How can I become a good tester? I have little or no money.


A: The cheapest, i.e. "free", education is often provided on the job, by an employer, while one is
getting paid to do a testing job, where one is able to use many different software testing tools.

Q50: What software tools are in demand these days?


A: There is no good answer to this question. The answer to this question can and will change
from day to day. What is in demand today, is not necessarily in demand tomorrow.

To give you some recent examples, some of the software tools on end clients' lists of
requirements include LabView, LoadRunner, Rational Tools and Winrunner.

But, as a general rule of thumb, there are many-many other items on their lists, depending on the
end client, their needs and preferences.

It is worth repeating... the answer to this question can and will change from one day to the next.
What is in demand today is not necessarily in demand tomorrow.

Q51: Which of these tools should I learn?


A: I suggest you learn some of the most popular software tools (e.g. WinRunner, LoadRunner,
LabView, and Rational Rose, etc.) with special attention paid to the Rational Toolset and
LoadRunner.

Q52: What is software configuration management?


A: Software Configuration management (SCM) relates to Configuration Management (CM).

SCM is the control, and the recording of, changes that are made to the software and
documentation throughout the software development life cycle (SDLC).

SCM covers the tools and processes used to control, coordinate and track code, requirements,
documentation, problems, change requests, designs, tools, compilers, libraries, patches, and
changes made to them, and to keep track of who makes the changes.

We, test engineers have experience with a full range of CM tools and concepts, and can easily
adapt to an organization's software tool and process needs.

Q53: What are some of the software configuration management tools?
A: Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS;
and there are many others. Rational ClearCase is a popular software tool, made by Rational
Software, for revision control of source code.

DOORS, or "Dynamic Object Oriented Requirements System", is a requirements version control
software tool.

CVS, or "Concurrent Version System", is a popular, open source version control system to keep
track of changes in documents associated with software projects. CVS enables several, often
distant, developers to work together on the same source code.
PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX
program, based on "diff". Diff is a UNIX utility that compares the difference between two text files.

Q54: Which of these roles are the best and most popular?
A: In testing, Tester roles tend to be the most popular. The less popular roles include the roles of
System Administrator, Test/QA Team Lead, and Test/QA Managers.

Q55: What other roles are in testing?


A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System
Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test
Configuration Managers.

Depending on the project, one person can, and often does, wear more than one hat. For instance, we
Test Engineers often wear the hat of Technical Analyst, Test Build Manager and Test
Configuration Manager as well.

Q56: What's the difference between priority and severity?


A: The simple answer is, "Priority is about scheduling, and severity is about standards."

The complex answer is, "Priority means something is afforded or deserves prior attention; a
precedence established by order of importance (or urgency). Severity is the state or quality of
being severe; severe implies adherence to rigorous standards or high principles and often
suggests harshness; severe is marked by or requires strict adherence to rigorous standards or
high principles, e.g. a severe code of behavior."

Q57: What's the difference between efficient and effective?


A: "Efficient" means having a high ratio of output to input; working or producing with a minimum of
waste. For example, "An efficient test engineer wastes no time", or "An efficient engine saves
gas".

"Effective", on the other hand, means producing, or capable of producing, an intended result, or
having a striking effect. For example, "For automated testing, WinRunner is more effective than
an oscilloscope", or "For rapid long-distance transportation, the jet engine is more effective than a
witch's broomstick".

Q58: What is the difference between verification and validation?


A: Verification takes place before validation, and not vice versa. Verification evaluates
documents, plans, code, requirements, and specifications. Validation, on the other hand,
evaluates the product itself.

The inputs of verification are checklists, issues lists, walk-throughs and inspection meetings,
reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual
product.
The output of verification is a nearly perfect set of documents, plans, specifications, and
requirements documents. The output of validation, on the other hand, is a nearly perfect, actual
product.

Q59: What is documentation change management?


A: Documentation change management is part of configuration management (CM). CM covers
the tools and processes used to control, coordinate and track code, requirements,
documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes
made to them and who makes the changes.

Rob Davis has had experience with a full range of CM tools and concepts. Rob Davis can easily
adapt to your software tool and process needs.

Q60: What is up time?


A: "Up time" is the time period when a system is operational and in service. Up time is the sum of
busy time and idle time.

For example, if, out of 168 hours, a system has been busy for 50 hours, idle for 110 hours, and
down for 8 hours, then the busy time is 50 hours, idle time is 110 hours, and up time is (110 + 50
=) 160 hours.
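The arithmetic of the example is trivial to encode:

```python
def up_time(busy_hours, idle_hours):
    """Up time is the sum of busy time and idle time (in hours)."""
    return busy_hours + idle_hours

# The example above: out of 168 hours, busy 50, idle 110, down 8.
print(up_time(50, 110))          # 160
print(168 - 8 == up_time(50, 110))  # True: total minus down time agrees
```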

Q61: What is upwardly compatible software?


A: Upwardly compatible software is compatible with a later or more complex version of itself. For
example, upwardly compatible software is able to handle files created by a later version of
itself.

Q62: What is upward compression?


A: In software design, upward compression means a form of demodularization, in which a
subordinate module is copied into the body of a superior module.

Q63: What is usability?


A: Usability means ease of use; the ease with which a user can learn to operate, prepare inputs
for, and interpret outputs of a software product.

Q64: What is user documentation?


A: User documentation is a document that describes the way a software product or system
should be used to obtain the desired results.
Q65: What is a user manual?
A: A user manual is a document that presents information necessary to employ software or a
system to obtain the desired results.

Typically, what is described are system and component capabilities, limitations, options, permitted
inputs, expected outputs, error messages, and special instructions.

Q66: What is the difference between user documentation and user manual?
A: When a distinction is made between those who operate a computer system and those who use
it for its intended purpose, separate user documentation and a user manual are created. Operators
get user documentation, and users get user manuals.

Q67: What is user friendly software?


A: A computer program is user friendly when it is designed with ease of use as one of the
primary objectives of its design.

Q68: What is a user friendly document?


A: A document is user friendly when it is designed with ease of use as one of the primary
objectives of its design.

Q69: What is a user guide?


A: A user guide is the same as a user manual. It is a document that presents the information
necessary to employ a system or component to obtain the desired results.

Typically, what is described are system and component capabilities, limitations, options, permitted
inputs, expected outputs, error messages, and special instructions.

Q70: What is user interface?


A: A user interface is the interface between a human user and a computer system. It enables the
passage of information between a human user and the hardware or software components of a
computer system.

Q71: What is a utility?


A: A utility is a software tool designed to perform some frequently used support function; for
example, a program to print files.
Q72: What is utilization?
A: Utilization is the ratio of the time a system is busy to the time it is available. Utilization is a
useful measure in evaluating computer performance.
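As a rough illustration (the function names are mine, not from any standard), the up time and utilization arithmetic of Q60 and Q72 can be sketched in Python:

```python
def up_time(busy_hours, idle_hours):
    """Up time is the sum of busy time and idle time (Q60)."""
    return busy_hours + idle_hours

def utilization(busy_hours, available_hours):
    """Utilization is the ratio of busy time to available time (Q72)."""
    return busy_hours / available_hours

# The example from Q60: out of 168 hours, 50 busy, 110 idle, 8 down.
print(up_time(50, 110))      # 160 hours of up time
print(utilization(50, 160))  # 0.3125, i.e. busy 31.25% of the available time
```

Here "available" time is taken to mean the up time (busy plus idle), which is how the Q60 example adds up.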

Q73: What is V&V?


A: V&V is an acronym for verification and validation.

Q74: What is variable trace?


A: Variable trace is a record of the names and values of variables accessed and changed during
the execution of a computer program.

Q75: What is value trace?


A: Value trace is the same as variable trace. It is a record of the names and values of variables
accessed and changed during the execution of a computer program.

Q76: What is a variable?


A: Variables are data items whose values can change. One example is a variable we've named
"capacitor_voltage_10000", where "capacitor_voltage_10000" can be any whole number between
-10000 and +10000.

Keep in mind, there are local and global variables.
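As a rough sketch of the local/global distinction (the names and numbers are illustrative only, not from the text above):

```python
capacitor_voltage = 0  # a global variable, visible throughout the module

def charge():
    global capacitor_voltage  # opt in to modifying the global variable
    step = 100                # a local variable, visible only inside charge()
    capacitor_voltage = min(capacitor_voltage + step, 10000)

charge()
print(capacitor_voltage)  # 100
```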

Q77: What is a variant?


A: Variants are versions of a program. Variants result from the application of software diversity.

Q78: What is verification and validation (V&V)?


A: Verification and validation (V&V) is a process that helps to determine whether the software
requirements are complete and correct; whether the software of each development phase fulfills the
requirements and conditions imposed by the previous phase; and whether the final software complies
with the applicable software requirements.

Q79: What is a software version?


A: A software version is an initial release (or re-release) of a software product, associated with a
complete compilation (or recompilation) of the software.
Q80: What is a document version?
A: A document version is an initial release (or a complete re-release) of a document, as opposed
to a revision resulting from issuing change pages to a previous release.

Q81: What is VDD?


A: VDD is an acronym. It stands for "version description document".

Q82: What is a version description document (VDD)?


A: Version description document (VDD) is a document that accompanies and identifies a given
version of a software product.

Typically, the VDD includes a description and identification of the software, identification of
changes incorporated into this version, and installation and operating information unique to this
version of the software.

Q83: What is a vertical microinstruction?


A: A vertical microinstruction is a microinstruction that specifies one of a sequence of operations
needed to carry out a machine language instruction. Vertical microinstructions are short, 12 to 24
bit instructions. They're called vertical because they are normally listed vertically on a page.
A sequence of these 12 to 24 bit microinstructions is required to carry out a single machine
language instruction.

Besides vertical microinstructions, there are horizontal and diagonal microinstructions as well.

Q84: What is a virtual address?


A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. They
allow those locations to be accessed as though they were part of main storage.

Q85: What is virtual memory?


A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's program and
data are placed in auxiliary storage, and the operating system automatically swaps them in and
out of main storage as needed.

Q86: What is virtual storage?


A: Virtual storage is a storage allocation technique, in which auxiliary storage can be addressed
as though it were part of main storage. Portions of a user's program and data are placed in
auxiliary storage, and the operating system automatically swaps them in and out of main storage
as needed.

Q87: What is a waiver?


A: Waivers are authorizations to accept software that has been submitted for inspection and found
to depart from specified requirements, but is nevertheless considered suitable for use "as is", or
after rework by an approved method.

Q88: What is the waterfall model?


A: Waterfall is a model of the software development process in which the concept phase,
requirements phase, design phase, implementation phase, test phase, installation phase, and
checkout phase are performed in that order, possibly with overlap, but with little or no iteration.

Q89: What are the phases of the software development process?


A: The software development process consists of the concept phase, requirements phase, design
phase, implementation phase, test phase, installation phase, and checkout phase.

Q90: What models are used in software development?


A: In the software development process, the following models are used: the waterfall model, the
incremental development model, the rapid prototyping model, and the spiral model.
Q91: What is SDLC?
A: SDLC is an acronym. It stands for "software development life cycle".

Q92: What is the difference between system testing and integration testing?
A: System testing is high level testing, and integration testing is lower level testing. Integration
testing is completed first, not system testing. In other words, upon completion of integration
testing, system testing is started, and not vice versa.

For integration testing, test cases are developed with the express purpose of exercising the
interfaces between the components.

For system testing, on the other hand, the complete system is configured in a controlled
environment, and test cases are developed to simulate real life scenarios that occur in a
simulated real life test environment.

The purpose of integration testing is to ensure distinct components of the application still work in
accordance with customer requirements.

The purpose of system testing, on the other hand, is to validate an application's accuracy and
completeness in performing the functions as designed, and to test all functions of the system that
are required in real life.

Q93: Can you give me more information on software QA/testing, from a tester's point of view?
A: Yes, I can. You can visit my web site, and on pages robdavispe.com/free and
robdavispe.com/free2 you can find answers to many questions on software QA, documentation,
and software testing, from a tester's point of view. As to questions and answers that are not on my
web site now, please be patient, as I am going to add more answers, as soon as time permits.

Q94: What are the parameters of performance testing?


A: Performance testing verifies loads, volumes, and response times, as defined by requirements.
Performance testing is a part of system testing, but it is also a distinct level of testing.

The term 'performance testing' is often used synonymously with stress testing, load testing,
reliability testing, and volume testing.
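As a rough, hypothetical sketch of one performance testing parameter, response time, measured against an invented 0.5 second requirement (the operation and threshold are illustrative, not from the text):

```python
import time

def respond():
    # stand-in for the operation whose response time we are measuring
    time.sleep(0.01)
    return "ok"

start = time.perf_counter()
result = respond()
elapsed = time.perf_counter() - start

# verify the response time against the (invented) requirement
assert result == "ok"
assert elapsed < 0.5, f"response took {elapsed:.3f}s, requirement is 0.5s"
```

Real performance testing tools repeat such measurements under load and report distributions, not a single sample.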

Q95: What types of testing can you tell me about?


A: Each of the following represents a different type of testing approach: black box testing, white
box testing, unit testing, incremental testing, integration testing, functional testing, system testing,
end-to-end testing, sanity testing, regression testing, acceptance testing, load testing,
performance testing, usability testing, install/uninstall testing, recovery testing, security testing,
compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison
testing, alpha testing, beta testing, and mutation testing.

Q96: What is disaster recovery testing?


A: Disaster recovery testing is testing how well the system recovers from disasters, crashes,
hardware failures, or other catastrophic problems.

Q97: How do you conduct peer reviews?


A: Peer reviews, sometimes called PDRs, are formal meetings, more formalized than a walk-through,
and typically consist of 3-10 people, including the test lead, the task lead (the author of
whatever is being reviewed), and a facilitator (to make notes).

The subject of the PDR is typically a code block, release, feature, or document. The purpose of
the PDR is to find problems and see what is missing, not to fix anything.

The result of the meeting is documented in a written report. Attendees should prepare for PDRs
by reading through documents, before the meeting starts; most problems are found during this
preparation.

Why are PDRs so useful? Because PDRs are a cost-effective method of ensuring quality, and because
bug prevention is more cost effective than bug detection.
Q98: How do you test the password field?
A: To test the password field, we do boundary value testing.
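As a rough sketch, assuming a hypothetical rule that passwords must be 8 to 16 characters long (the rule is invented for illustration), boundary value testing probes just below, at, and just above each boundary:

```python
MIN_LEN, MAX_LEN = 8, 16  # assumed requirement, for illustration only

def password_accepted(password):
    # stand-in for the password field's length validation
    return MIN_LEN <= len(password) <= MAX_LEN

# boundary value testing: test cases at and around each boundary
assert not password_accepted("a" * (MIN_LEN - 1))  # 7 chars: reject
assert password_accepted("a" * MIN_LEN)            # 8 chars: accept
assert password_accepted("a" * MAX_LEN)            # 16 chars: accept
assert not password_accepted("a" * (MAX_LEN + 1))  # 17 chars: reject
```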

Q99: How do you check the security of your application?


A: To check the security of an application, we can use security/penetration testing.
Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage.

This type of testing usually requires sophisticated testing techniques.

Q100: When testing the password field, what is your focus?


A: When testing the password field, one needs to verify that passwords are encrypted.

Q101: What is the objective of regression testing?


A: The objective of regression testing is to test that the fixes have not created any other problems
elsewhere. In other words, the objective is to ensure the software has remained intact.

A baseline set of data and scripts is maintained and executed, to verify that changes introduced
during the release have not "undone" any previous code.

Expected results from the baseline are compared to results of the software under test. All
discrepancies are highlighted and accounted for, before testing proceeds to the next level.
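As a rough sketch of the baseline comparison described above (the test names and results are invented for illustration):

```python
# baseline: expected results captured before the change under test
baseline = {"login": "ok", "search": "3 hits", "logout": "ok"}

def run_tests():
    # stand-in for re-executing the baseline scripts against the new build
    return {"login": "ok", "search": "3 hits", "logout": "ok"}

actual = run_tests()

# highlight every discrepancy between baseline and actual results
discrepancies = {name: (baseline[name], actual.get(name))
                 for name in baseline if actual.get(name) != baseline[name]}
assert not discrepancies, f"regression detected: {discrepancies}"
```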

Q102: What stage of bug fixing is the most cost effective?


A: Bug prevention, i.e. inspections, PDRs, and walk-throughs, is more cost effective than bug
detection.

Q103: What types of white box testing can you tell me about?
A: White box testing is a testing approach that examines the application's program structure, and
derives test cases from the application's program logic.

Clear box testing, glass box testing, and open box testing are all white box types of testing.
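As a rough sketch of the white box approach, with an invented function, test cases can be derived from the program logic by choosing one input per branch:

```python
def classify(n):
    if n < 0:              # branch 1
        return "negative"
    return "non-negative"  # branch 2

# white box testing: one test case per branch, derived from the code itself
branch1_ok = classify(-1) == "negative"
branch2_ok = classify(0) == "non-negative"
assert branch1_ok and branch2_ok
```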

Q104: What black box testing types can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal software design
or code.
Black box testing is based on requirements and functionality. Functional testing is also a black-
box type of testing geared to functional requirements of an application.

System testing, acceptance testing, closed box testing, and integration testing are also black
box types of testing.

Q105: Is regression testing performed manually?


A: It depends on the initial testing approach. If the initial testing approach was manual testing,
then the regression testing is normally performed manually.

Conversely, if the initial testing approach was automated testing, then the regression testing is
normally performed by automated testing.

Q106: Give me others' FAQs on testing.



Q107: Can you share with me your knowledge of software testing?

Q108: How can I learn software testing?



Q109: What is your view of software QA/testing?


A: Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive,
attainable and testable, and if schedules are realistic, and if there is good communication.

Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is
allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation.

Q110: What is your view of software QA/testing? (Cont'd...)


Software QA/testing is relatively easy, if testing is started early on, and if fixes or changes are re-
tested, and if sufficient time is planned for both testing and bug fixing.

Software QA/testing is easy, if new features are avoided, and if one sticks to initial requirements
as much as possible.

Q111: How can I be a good tester?


A: We, good testers, take the customers' point of view. We are tactful and diplomatic. We have a
"test to break" attitude, a strong desire for quality, an attention to detail, and good communication
skills, both oral and written.

Previous software development experience is also helpful, as it provides a deeper understanding
of the software development process.

Q112: What is the difference between software bug and software defect?
A: A 'software bug' is a nonspecific term that means an inexplicable defect, error, flaw, mistake,
failure, fault, or unwanted behavior of a computer program.

Other terms, e.g. software defect and software failure, are more specific.

Many believe the word 'bug' is a reference to insects that caused malfunctions in early
electromechanical computers (in the 1940s and 1950s), but the truth is the word 'bug' has been
part of engineering jargon for 100+ years. Thomas Edison, the great inventor, wrote the following
in 1878: "It has been just so in all of my inventions. The first step is an intuition, and comes with a
burst, then difficulties arise—this thing gives out and [it is] then that "Bugs" — as such little faults
and difficulties are called — show themselves and months of intense watching, study and labor
are requisite before commercial success or failure is certainly reached."

Q113: How can I improve my career in software QA/testing?



Q114: How do you compare two files?


A: Use PVCS, SCCS, or "diff". PVCS is a document version control tool, a competitor of SCCS.
SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that compares the
difference between two text files.

Q115: What do we use for comparison?


A: Generally speaking, when we write a software program to compare files, we compare two files,
bit by bit. For example, when we use "diff", a UNIX utility, we compare two text files.
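As a rough sketch of such a comparison (a simplified stand-in for tools like "diff", not their actual implementation), two files can be compared chunk by chunk at the byte level:

```python
import os
import tempfile

def files_identical(path_a, path_b, chunk_size=4096):
    """Compare two files chunk by chunk, at the byte level."""
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        while True:
            chunk_a = a.read(chunk_size)
            chunk_b = b.read(chunk_size)
            if chunk_a != chunk_b:
                return False
            if not chunk_a:  # both files ended at the same point: identical
                return True

# demo with two temporary files
with tempfile.TemporaryDirectory() as d:
    p1, p2 = os.path.join(d, "a.txt"), os.path.join(d, "b.txt")
    open(p1, "wb").write(b"same content")
    open(p2, "wb").write(b"same content")
    same = files_identical(p1, p2)   # True: files match
    open(p2, "ab").write(b"!")
    diff = files_identical(p1, p2)   # False: files now differ
```

Note that "diff" itself works line by line on text and reports where the differences are; this sketch only answers whether the files are identical.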

Q116: What is the reason we compare files?


A: We compare files because of configuration management, revision control, requirement version
control, or document version control. Examples are Rational ClearCase, DOORS, PVCS, and
CVS. CVS, for example, enables several, often distant, developers to work together on the same
source code.

Q117: When is a process repeatable?


A: If we use detailed and well-written processes and procedures, we ensure the correct steps are
being executed. This facilitates a successful completion of a task. This is a way we also ensure a
process is repeatable.

Q118: What is test methodology?


A: One test methodology is a three-step process: creating a test strategy, creating a test
plan/design, and executing tests. This methodology can be used and molded to your
organization's needs.

Rob Davis believes that using this methodology is important in the development and ongoing
maintenance of his customers' applications.

Q119: What does a Test Strategy Document contain?


A: The test strategy document is a formal description of how a software product will be tested. A
test strategy is developed for all levels of testing, as required.

The test team analyzes the requirements, writes the test strategy and reviews the plan with the
project team.

The test plan may include test cases, conditions, the test environment, and a list of related tasks,
pass/fail criteria and risk assessment.

Additional sections in the test strategy document include:

A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.

A description of roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.

Testing methodology. This is based on known standards.

Functional and technical requirements of the application. This information comes from
requirements, change requests, and technical and functional design documents.

Requirements that the system cannot provide, e.g. system limitations.

Q120: How can I start my career in automated testing?


A: Number one: I suggest you read all you can, and that includes reading product description
pamphlets, manuals, books, information on the Internet, and whatever information you can lay
your hands on.

Two, get hands-on experience on how to use automated testing tools.

If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use
WinRunner, and many other automated testing tools, with little or no outside help. Click on a link!

Q121: What is monkey testing?


A: "Monkey testing" is random testing performed by automated testing tools. These automated
testing tools are considered "monkeys", if they work at random.

We call them "monkeys" because it is widely believed, if we allow six monkeys to pound on six
typewriters at random, for a million years, they will recreate all the works of Isaac Asimov.

There are "smart monkeys" and "dumb monkeys".

"Smart monkeys" are valuable for load and stress testing, and will find a significant number of
bugs, but they're also very expensive to develop.

"Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic
testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and
crashes, i.e. the bugs you least want to have in your software product.

"Monkey testing" can be valuable, but it should not be your only testing approach.
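As a rough sketch of a "dumb monkey" (the software under test and its hidden bug are invented for illustration):

```python
import random
import string

def software_under_test(text):
    # stand-in for the real application; it crashes whenever the input
    # contains a digit (a hidden bug the monkey may stumble upon)
    if any(ch.isdigit() for ch in text):
        raise RuntimeError("crash")
    return len(text)

random.seed(0)  # seed the monkey so the run is reproducible
crashes = 0
for _ in range(1000):
    text = "".join(random.choices(string.printable, k=5))
    try:
        software_under_test(text)
    except RuntimeError:
        crashes += 1  # a dumb monkey only notices hangs and crashes
```

A dumb monkey like this reports how often the program crashed, but knows nothing about whether the non-crashing outputs were correct.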

Q122: What is stochastic testing?


A: Stochastic testing is the same as "monkey testing", but stochastic testing is a more technical
sounding name for the same testing process.

Stochastic testing is black box testing, random testing, performed by automated testing tools.
Stochastic testing is a series of random tests over time.

The software under test typically passes the individual tests, but our goal is to see if it can pass a
large series of the individual tests.

Q123: What is mutation testing?


A: In mutation testing, we create mutant software, we make mutant software to fail, and thus
demonstrate the adequacy of our test case.

When we create a set of mutant software, each mutant software differs from the original software
by one mutation, i.e. one single syntax change made to one of its program statements, i.e. each
mutant software contains only one single fault.

When we apply test cases to the original software and to the mutant software, we evaluate if our
test case is adequate.

Our test case is inadequate, if both the original software and all mutant software generate the
same output.

Our test case is adequate, if our test case detects faults, or if at least one mutant software
generates a different output than the original software does for our test case.
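As a rough sketch of the idea (the functions are invented for illustration): an original function, one mutant differing by a single syntax change, and a test case judged by whether it distinguishes them.

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # one single syntax change: '+' mutated to '-'

def kills_mutant(test_input):
    a, b = test_input
    # the test case is adequate if it makes the mutant's output differ
    return original(a, b) != mutant(a, b)

inadequate = kills_mutant((2, 0))  # both return 2: the mutant survives
adequate = kills_mutant((2, 3))    # 5 != -1: the mutant is killed
```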

Q124: What is PDR?


A: PDR is an acronym. In the world of software QA/testing, it stands for "peer design review", or
"peer review".

Q125: What is good about PDRs?


A: PDRs are informal meetings, and I do like all informal meetings. PDRs make perfect sense,
because they're for the mutual benefit of you and your end client.

Your end client requires a PDR, because they work on a product, and want to come up with the
very best possible design and documentation.

Your end client requires you to have a PDR, because when you organize a PDR, you invite and
assemble the end client's best experts and encourage them to voice their concerns as to what
should or should not go into the design and documentation, and why.

When you're a developer, designer, author, or writer, it's also to your advantage to come up with
the best possible design and documentation.

Therefore you want to embrace the idea of the PDR, because holding a PDR gives you a
significant opportunity to invite and assemble the end client's best experts and make them work
for you for one hour, for your own benefit.

To come up with the best possible design and documentation, you want to encourage your end
client's experts to speak up and voice their concerns as to what should or should not go into your
design and documentation, and why.

Q126: Why is it that my company requires a PDR?


A: Your company requires a PDR, because your company wants to be the owner of the very best
possible design and documentation. Your company requires a PDR, because when you organize
a PDR, you invite, assemble and encourage the company's best experts to voice their concerns
as to what should or should not go into your design and documentation, and why.

Remember, PDRs are not about you, but about design and documentation. Please don't be
negative; please do not assume your company is finding fault with your work, or distrusting you in
any way. There is a 90+ per cent probability your company wants you, likes you, and trusts you,
because you're a specialist, and because your company hired you after a long and careful
selection process.

Your company requires a PDR, because PDRs are useful and constructive. Just about everyone -
even corporate chief executive officers (CEOs) - attend PDRs from time to time. When a
corporate CEO attends a PDR, he has to listen for "feedback" from shareholders. When a CEO
attends a PDR, the meeting is called the "annual shareholders' meeting".

Q127: Give me a list of ten good things about PDRs!


A: Number 1: PDRs are easy, because all your meeting attendees are your co-workers and
friends.

Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you
produce better designs and better documents than the ones you could come up with on your own.

Q128: Give me a list of ten good things about PDRs! (Cont'd...)


Number 3: Preparation for PDRs helps a lot, but, in the worst case, if you had no time to read
every page of every document, it's still OK for you to show up at the PDR.

Number 4: It's technical expertise that counts the most, but many times you can influence your
group just as much, or even more so, if you're dominant or have good acting skills.

Number 5: PDRs are easy, because, even at the best and biggest companies, you can dominate
the meeting by being either very negative, or very bright and wise.

Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and
wisest meeting attendees are usually gentle on you; they deliver gentle suggestions that are
constructive, not destructive.

Number 7: You get many chances to express your ideas, every time a meeting attendee asks you
to justify why you wrote what you wrote.

Number 8: PDRs are effective, because there is no need to wait for anything or anyone; because
the attendees make decisions quickly (as to what errors are in your document). There is no
confusion either, because all the group's recommendations are clearly written down for you by the
PDR's facilitator.

Number 9: Your work goes faster, because the group itself is an independent decision making
authority. Your work gets done faster, because the group's decisions are subject to neither
oversight nor supervision.

Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and
they work for you, for FREE!

Q129: What is the Exit criteria?


A: "Exit criteria" is a checklist, sometimes known as the "PDR sign-off sheet", i.e. a list of peer
design review related tasks that have to be done by the facilitator or other attendees of the PDR,
during or near the conclusion of the PDR.

By having a checklist, and by going through a checklist, the facilitator can...

1. Verify that the attendees have inspected all the relevant documents and reports, and

2. Verify that all suggestions and recommendations for each issue have been recorded, and

3. Verify that all relevant facts of the meeting have been recorded.

The facilitator's checklist includes the following questions:

1. "Have we inspected all the relevant documents, code blocks, or products?"

2. "Have we completed all the required checklists?"

3. "Have I recorded all the facts relevant to this peer review?"

4. "Does anyone have any additional suggestions, recommendations, or comments?"

5. "What is the outcome of this peer review?" At the end of the peer review, the facilitator asks the
attendees of the peer review to make a decision as to the outcome of the peer review. I.e., "What
is our consensus?" "Are we accepting the design (or document or code)?"

Q130: Have you attended any review meetings?


A: Yes, in the last 10+ years I have attended many review meetings; mostly peer reviews. In
today's corporate world, the vast majority of review meetings are peer review meetings.

In my experience, the most useful peer reviews are the ones where you're the author of
something. Why? Because when you're the author, then it's you who decides what to do and how,
and it's you who receives all the free help.

In my experience, in the long run, the inputs of your additional reviewers and additional
attendees can be the most valuable to you and your company. But, in your own best interest, in
order to expedite things, before every peer review it is a good idea to get together with the
additional reviewers and attendees, and talk with them about issues, because if you don't,
they will be the ones with the largest number of questions and, usually, negative feedback.

When a PDR is done right, it is useful, beneficial, pleasant, and friendly. Generally speaking, the
fewer people show up at the PDR, the easier it tends to be, and the earlier it can be adjourned.

When you're an author, developer, or task lead, many times you can relax, because during your
peer review your facilitator and test lead are unlikely to ask you any tough questions. Why?
Because the facilitator is too busy taking notes, and the test lead is kind of bored (because he
has already asked his toughest questions before the PDR).

When you're a facilitator, every PDR tends to be a pleasant experience. In my experience, some of
the easiest review meetings are PDRs where you're the facilitator (whose only job is to call the
shots and make notes).

Q131: What types of review meetings can you tell me about?


A: Of review meetings, peer design reviews are the most common. Peer design reviews are so
common that they tend to replace both inspections and walk-throughs.

Peer design reviews can be classified according to the 'subject' of the review. I.e., "Is this a
document review, design review, or code review?"

Peer design reviews can be classified according to the 'role' you play at the meeting. I.e., "Are
you the task lead, test lead, facilitator, moderator, or additional reviewer?"

Peer design reviews can be classified according to the 'job title' of attendees. I.e., "Is this a
meeting of peers, managers, systems engineers, or system integration testers?"

Peer design reviews can be classified according to what is being reviewed at the meeting. I.e.,
"Are we reviewing the work of a developer, tester, engineer, or technical document writer?"

Peer design reviews can be classified according to the 'objective' of the review. I.e., "Is this
document for the file cabinets of our company, or that of the government (e.g. the FAA or FDA)?"

PDRs of government documents tend to attract the attention of managers, and the meeting
quickly becomes a meeting of managers.
Q132: How can I shift my focus and area of work from QC to QA?
A: Number one, focus on your strengths, skills, and abilities! Realize that there are MANY
similarities between Quality Control and Quality Assurance! Realize that you have MANY
transferable skills!

Number two, make a plan! Develop a belief that getting a job in QA is easy! HR professionals
cannot tell the difference between quality control and quality assurance! HR professionals tend to
respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!

Number three, make it a reality! Invest your time! Get some hands-on experience! Do some QA
work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your goals,
beliefs, enthusiasm, and action will make a huge difference in your life!

Number four, I suggest you read all you can, and that includes reading product pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your hands
on! If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to do
QA work, with little or no outside help! Click on a link!

Q133: What techniques and tools can enable me to migrate from QC to QA?
A: Technique number one is mental preparation. Understand and believe what you want is not
unusual at all! Develop a belief in yourself! Start believing what you want is attainable! You can
change your career! Every year, millions of men and women change their careers successfully!

Number two, make a plan! Develop a belief that getting a job in QA is easy! HR professionals
cannot tell the difference between quality control and quality assurance! HR professionals tend to
respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!

Q134: What techniques and tools can enable me to migrate from QC to QA? (Cont'd...)
A: Number three, make it a reality! Invest your time! Get some hands-on experience! Do some
QA work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your
goals, beliefs, enthusiasm, and action will make a huge difference in your life!

Number four, I suggest you read all you can, and that includes reading product pamphlets,
manuals, books, information on the Internet, and whatever information you can lay your hands
on!

If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to do QA
work, with little or no outside help! Click on a link!

Q135: What is the difference between build and release?


A: Builds and releases are similar, because both builds and releases are end products of
software development processes. Builds and releases are similar, because both builds and
releases help developers and QA teams to deliver reliable software.

A build is a version of software, typically one that is still in testing. Usually a version number
is given to a released product, but sometimes a build number is used instead.

Difference number one: builds refer to software that is still in testing; a release refers to software
that is usually no longer in testing.

Q136: What is the difference between build and release? (Cont'd...)

Difference number two: Builds occur more frequently; releases occur less frequently.

Difference number three: versions are based on builds, and not vice versa. Builds, or usually a
series of builds, are generated first, as often as one build every morning, depending on the
company; every release is then based on a build, or several builds, i.e. the accumulated code
of several builds.

Q137: What is CMM?


A: CMM is an acronym that stands for Capability Maturity Model. The idea behind CMM is that, in future efforts to develop and test software, concepts and experience alone do not always point us in the right direction; therefore we should develop processes, and then refine those processes.

There are five CMM levels, of which Level 5 is the highest...

CMM Level 1 is called "Initial".
CMM Level 2 is called "Repeatable".
CMM Level 3 is called "Defined".
CMM Level 4 is called "Managed".
CMM Level 5 is called "Optimized".

There are not many Level 5 companies, and most companies hardly need to be at Level 5. Within the United States, fewer than 8% of software companies are rated CMM Level 4 or higher. The U.S. government requires that all companies with federal government contracts maintain a minimum of a CMM Level 3 assessment.

CMM assessments take two weeks. They are conducted by a nine-member team, led by an SEI-certified lead assessor.

Q138: What are CMM levels and their definitions?


A: There are five CMM levels of which level 5 is the highest.

CMM level 1 is called "initial". The software process is at CMM level 1, if it is an ad hoc process.
At CMM level 1, few processes are defined, and success, in general, depends on individual effort
and heroism.

CMM level 2 is called "repeatable". The software process is at CMM level 2, if the subject
company has some basic project management processes, in order to track cost, schedule, and
functionality. Software processes are at CMM level 2, if necessary processes are in place, in
order to repeat earlier successes on projects with similar applications. Software processes are at
CMM level 2, if there are requirements management, project planning, project tracking,
subcontract management, QA, and configuration management.

CMM level 3 is called "defined". The software process is at CMM level 3, if the software process
is documented, standardized, and integrated into a standard software process for the subject
company. The software process is at CMM level 3, if all projects use approved, tailored versions
of the company's standard software process for developing and maintaining software. Software
processes are at CMM level 3, if there are process definition, training programs, process focus,
integrated software management, software product engineering, intergroup coordination, and
peer reviews.

CMM level 4 is called "managed". The software process is at CMM level 4, if the subject company
collects detailed data on the software process and product quality, and if both the software
process and the software products are quantitatively understood and controlled. Software
processes are at CMM level 4, if there are software quality management (SQM) and quantitative
process management.

Q139: What are CMM levels and their definitions? (Cont'd...)


CMM level 5 is called "optimized". The software process is at CMM level 5, if there is continuous process improvement, driven by quantitative feedback from the process and from piloting innovative ideas and technologies. Software processes are at CMM level 5, if there are process change management, defect prevention, and technology change management.

Q140: What is the difference between bug and defect in software testing?
A: In software testing, the difference between bug and defect is small, and depends on your
company. For some companies, bug and defect are synonymous, while others believe bug is a
subset of defect.

Generally speaking, we, software test engineers, discover BOTH bugs and defects, before bugs
and defects damage the reputation of our company. We, QA engineers, use the software much
like real users would, to find BOTH bugs and defects, to find ways to replicate BOTH bugs and
defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e.
tell them if they've achieved the desired level of quality. Therefore, we, software QA engineers, do
not differentiate between bugs and defects. In our bug reports, we include BOTH bugs and
defects, and any differences between them are minor.

Difference number one: In bug reports, the defects are usually easier to describe.

Difference number two: In bug reports, it is usually easier to write the descriptions on how to
replicate the defects. Defects tend to require brief explanations only.
Q141: What is grey box testing?
A: Grey box testing is a software testing technique that uses a combination of black box testing
and white box testing. Gray box testing is not black box testing, because the tester does know
some of the internal workings of the software under test.

In grey box testing, the tester applies a limited number of test cases to the internal workings of
the software under test. In the remaining part of the grey box testing, one takes a black box
approach in applying inputs to the software under test and observing the outputs.

Gray box testing is a powerful idea. The concept is simple; if one knows something about how the
product works on the inside, one can test it better, even from the outside.
Grey box testing is not to be confused with white box testing; i.e. a testing approach that attempts
to cover the internals of the product in detail. Grey box testing is a test strategy based partly on
internals.

The testing approach is known as gray box testing, when one does have some knowledge, but
not the full knowledge of the internals of the product one is testing.

In gray box testing, just as in black box testing, you test from the outside of a product, but you make better-informed testing choices, because you know how the underlying software components operate and interact.

Q142: What is the difference between version and release?


A: Both version and release indicate a particular point in the software development life cycle, or in
the life cycle of a document.

The two terms, version and release, are similar (i.e. mean pretty much the same thing), but there
are minor differences between them.

Version means a VARIATION of an earlier, or original, type; for example, "I've downloaded the
latest version of the software from the Internet. The latest version number is
3.3."

Release, on the other hand, is the ACT OR INSTANCE of issuing something for publication, use,
or distribution. Release is something thus released. For example, "A new release of a software
program."

Q143: What is data integrity?


A: Data integrity is one of the six fundamental components of information security.

Data integrity is the completeness, soundness, and wholeness of the data that also complies with
the intention of the creators of the data.

Databases store important data, including customer information, order records, and pricing tables.

In databases, data integrity is achieved by preventing the accidental, deliberate, or unauthorized insertion, modification, or destruction of data.

Q144: How do you test data integrity?


A: Data integrity testing should verify the completeness, soundness, and wholeness of the stored
data.

Testing should be performed on a regular basis, because important data can and will change over
time.

Data integrity tests include the following:

1. Verify that you can create, modify, and delete any data in tables.

2. Verify that sets of radio buttons represent fixed sets of values.

3. Verify that a blank value can be retrieved from the database.

4. Verify that, when a particular set of data is saved to the database, each value gets saved fully, and that truncation of strings and rounding of numeric values do not occur.

5. Verify that the default values are saved in the database, if the user input is not specified.

6. Verify compatibility with old data, old hardware, old versions of operating systems, and interfaces with other software.
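As a sketch of integrity test number 4 (values must be saved fully, with no truncation or rounding), the round trip can be checked programmatically. The table and column names below are hypothetical, and an in-memory SQLite database stands in for a real one:

```python
import sqlite3

# Hypothetical example: save a record, read it back, and verify that no
# truncation of strings or rounding of numeric values occurred.
conn = sqlite3.connect(":memory:")   # stand-in for a real database
conn.execute("CREATE TABLE pricing (item TEXT, price REAL)")

original = ("very-long-item-name-" + "x" * 100, 19.99)
conn.execute("INSERT INTO pricing VALUES (?, ?)", original)
conn.commit()

stored = conn.execute("SELECT item, price FROM pricing").fetchone()
assert stored == original, "data was truncated or rounded on save"
conn.close()
```

The same round-trip pattern applies to the other checks in the list above; only the table, columns, and sample values change.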
Q145: What is data validity?
A: Data validity is the correctness and reasonableness of data. Reasonableness of data means, for example, account numbers falling within a range, numeric data being all digits, dates having a valid month, day, and year, and proper names being spelled correctly.

Data validity errors are probably the most common, and the most difficult to detect, data-related
errors.

What causes data validity errors?

Data validity errors are usually caused by incorrect data entries, when a large volume of data is
entered in a short period of time.

For example, 12/25/2005 is entered as 13/25/2005 by mistake. This date is therefore invalid.

How can you reduce data validity errors? Use simple field validation rules.

Technique 1: If the date field in a database uses the MM/DD/YYYY format, then use a program
with the following two data validation rules: "MM should not exceed 12, and DD should not
exceed 31".

Technique 2: If the original figures do not seem to match the ones in the database, then use a
program to validate data fields. Compare the sum of the numbers in the database data field to the
original sum of numbers from the source. If there is a difference between the figures, it is an
indication of an error in at least one data element.
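The two field validation rules of Technique 1 can be sketched as a small function. The function name is an illustrative assumption, and the checks are deliberately limited to the two rules quoted above:

```python
def is_valid_date_field(value):
    """Apply the simple MM/DD/YYYY rules from Technique 1:
    MM must not exceed 12, and DD must not exceed 31."""
    parts = value.split("/")
    if len(parts) != 3:
        return False
    mm, dd, yyyy = parts
    if not (mm.isdigit() and dd.isdigit() and yyyy.isdigit()):
        return False
    return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31

# The mistyped date from the example above is rejected:
assert is_valid_date_field("12/25/2005") is True
assert is_valid_date_field("13/25/2005") is False
```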

Q146: What is the difference between data validity and data integrity?
A: Difference number one: Data validity is about the correctness and reasonableness of data,
while data integrity is about the completeness, soundness, and wholeness of the data that also
complies with the intention of the creators of the data.

Q147: What is the difference between data validity and data integrity? (Cont'd...)
Difference number two: Data validity errors are more common, while data integrity errors are less
common.
Difference number three: Errors in data validity are caused by HUMANS -- usually data entry
personnel -- who enter, for example, 13/25/2005, by mistake, while errors in data integrity are
caused by BUGS in computer programs that, for example, cause the overwriting of some of the
data in the database, when one attempts to retrieve a blank value from the database.

Q148: What is TestDirector?


A: TestDirector, also known as Mercury TestDirector, is a software tool made for software QA
professionals. Mercury TestDirector, as the name implies, is the product of Mercury Interactive
Corporation, located at 379 North Whisman Road, Mountain View, California 94043 USA.

Mercury's products include Mercury TestDirector®, Mercury QuickTest Professional™, Mercury WinRunner™, and Mercury Business Process Testing™.

Q149: How can I improve my career in software testing, in banking?
A: Number one: Improve your attitude! Become the best Software Test Engineer! Always strive to
exceed the expectations of your customers!

Q150: How can I improve my career in software testing, in banking? (Cont'd...)
Number two: Get an education! Sign up for courses at nearby educational institutes. Take
classes! Classroom education, especially non-degree courses in local community colleges, tends
to be inexpensive.

Number three: Get additional education, on the job, at the bank or financial institution where you
work. Free education is often provided by employers, while you are paid to do the job of a
Software Test Engineer.

On the job, oftentimes you can use some of the world's best software tools, including the Rational
Toolset, and there are many others. If your immediate manager is reluctant to train you on the job,
in order to do your job, then quietly find another banker, i.e. another employer, whose needs and
preferences are similar to yours.

Q151: Tell me about 'TestDirector'.


A: Made by Mercury Interactive, 'TestDirector' is a single browser-based application that
streamlines the software QA process. It is a software tool that helps software QA professionals to
gather requirements, to plan, schedule and run tests, and to manage and track
defects/issues/bugs.

TestDirector's Requirements Manager links test cases to requirements, ensures traceability, and
calculates what percentage of the requirements are covered by tests, how many of these tests
have been run, and how many have passed or failed.

As to planning, test plans can be created, or imported, for both manual and automated tests. The test plans can then be reused, shared, and preserved. As to running tests, the TestDirector's Test Lab Manager allows you to schedule tests to run unattended, or even run overnight.

The TestDirector's Defect Manager supports the entire bug life cycle, from initial problem
detection through fixing the defect, and verifying the fix. Additionally, the TestDirector can create
customizable graphs and reports, including test execution reports and release status
assessments.

Q152: What is structural testing?


A: Structural testing is also known as clear box testing or glass box testing. Structural testing is a way to test software with knowledge of the internal workings of the code being tested.

Structural testing is white box testing, not black box testing, since black boxes are considered
opaque and do not permit visibility into the code.

Q153: What is the difference between static and dynamic testing?


A: The differences between static and dynamic testing are as follows:

Difference number 1: Static testing is about prevention, dynamic testing is about cure.

Difference number 2: The static tools offer greater marginal benefits.

Difference number 3: Static testing is many times more cost-effective than dynamic testing.

Difference number 4: Static testing beats dynamic testing by a wide margin.

Difference number 5: Static testing is more effective!

Difference number 6: Static testing gives you comprehensive diagnostics for your code.

Difference number 7: Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.

Difference number 8: Dynamic testing usually takes longer than static testing. Dynamic testing
may involve running several test cases, each of which may take longer than compilation.

Difference number 9: Dynamic testing finds fewer bugs than static testing.

Difference number 10: Static testing can be done before compilation, while dynamic testing can
take place only after compilation and linking.

Difference number 11: Static testing can find all of the following that dynamic testing cannot find:
syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform
to coding standards, and ANSI violations.
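As a small illustration of differences number 10 and 11, the sketch below uses Python's built-in compile() as a stand-in for a static checking tool: the syntax error is caught without executing anything, while the division-by-zero bug is only exposed when the faulty path actually runs.

```python
# Static check: the syntax error is found before any code executes.
bad_source = "def f(:\n    return 1"
try:
    compile(bad_source, "<example>", "exec")
    static_result = "no errors found"
except SyntaxError as e:
    static_result = f"static check caught: {e.msg}"

# Dynamic check: this bug is only found if the faulty path is executed.
good_source = "def divide(a, b):\n    return a / b"
namespace = {}
exec(compile(good_source, "<example>", "exec"), namespace)
try:
    namespace["divide"](1, 0)   # this path must run to expose the bug
    dynamic_result = "no errors found"
except ZeroDivisionError:
    dynamic_result = "dynamic test caught: division by zero"
```

If the divide(1, 0) call were never part of a test case, dynamic testing would miss the bug entirely, which is exactly the coverage point made in difference number 7.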

Q154: What testing tools should I use?


A: Ideally, you should use both static and dynamic testing tools. To maximize software reliability,
you should use both static and dynamic techniques, supported by appropriate static and dynamic
testing tools.
Static and dynamic testing are complementary. Static and dynamic testing find different classes of
bugs. Some bugs are detectable only by static testing, some only by dynamic.

Dynamic testing does detect some errors that static testing misses. To eliminate as many errors
as possible, both static and dynamic testing should be used.

All this static testing (i.e. testing for syntax errors, testing for code that is hard to maintain, testing
for code that is hard to test, testing for code that does not conform to coding standards, and
testing for ANSI violations) takes place before compilation. Static testing takes roughly as long as
compilation and checks every statement you have written.

Q155: Why should I use static testing techniques?


A: You should use static testing techniques because static testing is a bargain, compared to
dynamic testing. Static testing is up to 100 times more effective. Even in selective testing, static
testing may be up to 10 times more effective. The most pessimistic estimates suggest a factor of
4.

Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs
by static testing is many times lower than that of by dynamic testing.

Q156: Why should I use static testing techniques? (Cont'd...)


About half of the bugs, detectable by dynamic testing, can be detected earlier by static testing.

If you use neither static nor dynamic test tools, the static tools offer greater marginal benefits.

If urgent deadlines loom on the horizon, the use of dynamic testing tools can be omitted, but tool-
supported static testing should never be omitted.

Q157: How can I get registered and licensed as a professional engineer?
A: To get registered and licensed as a professional engineer, generally you have to be a legal
resident of the jurisdiction where you submit your application.

You also have to be at least 18 years of age, trustworthy, with no criminal record. You also have
to have a minimum of a bachelor's degree in engineering, from an established, recognized, and
approved university.

Usually you have to provide two references, from licensed and professional engineers, and work
for a few years as an engineer, as an "engineer in training", under the supervision of a registered
and licensed professional engineer. You have to pass a test of competence in your engineering
discipline as well as in professional ethics.

For many candidates, the biggest two hurdles of getting a license seem to be the lack of a
university degree in engineering, or the lack of an acceptable, verifiable work experience, under
the supervision of a licensed, professional engineer.
Q158: I don't have any experience. How can I get my first experience?
A: I see MANY possibilities.

Possibility number 1: Work for a company as a technician, preferably at a small company, or a company that promotes from within. Once hired, work your way up to the test bench, and you WILL get your first experience!

Possibility number 2: Know someone, and you WILL get your first job!

Possibility number 3: Sell yourself well! If you are confident, you WILL get your first job! Make
yourself shine, and the job will fall in your lap!

Possibility number 4: Speak to a manager, make a good impression, and you WILL get your first
job!

Possibility number 5: Attend a school of good reputation. If your prospective boss is familiar with
the school, you WILL get your first job!

Possibility number 6: Attend a school that offers job placement, with a real record of job
placement assistance. Then do what they say, and then you WILL get your first
job!

Possibility number 7: Believe in yourself, be confident, and you WILL get your first job!

Possibility number 8: Ask employment agencies. They usually keep in touch with various
companies. Sometimes they're friends with managers. Other times they're unusually well-
informed. They will help you to get your first job!

Q159: I don't have any experience. How can I get my first experience? (Cont'd...)
Possibility number 9: Work for a company as a volunteer, i.e. an employee without pay. Once
hired, you WILL get your first experience!

Possibility number 10: Get your first job by training yourself. Training yourself on a PC (or Mac), with the proper software, can be useful, if you spend your time using it to its maximum potential!

Q160: What is the definition of top down design?


A: Top down design progresses from simple design to detailed design. Top down design solves
problems by breaking them down into smaller, easier to solve subproblems. Top down design
creates solutions to these smaller problems, and then tests them using test drivers.

In other words, top down design starts the design process with the main module or system, then
progresses down to lower level modules and subsystems.

To put it differently, top down design looks at the whole system, and then explodes it into
subsystems, or smaller parts. A systems engineer or systems analyst determines what the top
level objectives are, and how they can be met. He then divides the system into subsystems, i.e.
breaks the whole system into logical, manageable-size modules, and deals with them individually.
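A minimal sketch of this top down approach, with hypothetical module names: the top-level module is designed first, and the lower-level modules start life as simple stubs that are refined later.

```python
def load_orders():
    # Stub for a lower-level subsystem; a later iteration would replace
    # this with real database or file access.
    return [{"id": 1, "total": 10.0}, {"id": 2, "total": 25.5}]

def compute_revenue(orders):
    # Another lower-level module, refined after the top level is settled.
    return sum(order["total"] for order in orders)

def main():
    # The top-level module, designed first: it fixes the interfaces the
    # subsystems below it must satisfy.
    orders = load_orders()
    return compute_revenue(orders)

assert main() == 35.5
```

A test driver here is simply code (like the final assertion) that exercises main() while the lower-level modules are still stubs.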

Q161: What are the future prospects of software QA/testing?


A: In many IT-related occupations, employers want to see an increasingly broader range of skills;
often non-technical skills. In software QA/testing, for example, employers want us to have a
combination of technical, business, and personal skills.

Technical skills mean skills in IT, quantitative analysis, data modeling, and technical writing.
Business skills mean skills in strategy and business writing. Personal skills mean personal
communication, leadership, teamwork, and problem-solving skills.

We, employees, on the other hand, want increasingly more autonomy, better lifestyle, increasingly
more employee oriented company culture, and better geographic location. We will continue to
enjoy relatively good job security and, depending on the business cycle, many job opportunities
as well.

We realize our skills are important, and have strong incentives to upgrade our skills, although
sometimes lack the information on how to do so. Educational institutions are increasingly more
likely to ensure that we are exposed to real-life situations and problems, but high turnover rates
and a rapid pace of change in the IT industry will often act as strong disincentives for employers
to invest in our skills, especially non-company specific skills. Employers will continue to establish
closer links with educational institutions, both through in-house education programs and human
resources.

The share of IT workers with IT degrees will keep increasing. Certification will continue to help employers quickly identify those of us with the latest skills. During boom times, smaller and
younger companies will continue to be the most attractive to us, especially those companies that
offer stock options and performance bonuses in order to retain and attract those of us who are
most skilled.

Q162: What are the future prospects of software QA/testing? (Cont'd...)

High turnover rates will continue to be the norm, especially during boom times. Software QA/testing will
continue to be outsourced to offshore locations. Software QA/testing will continue to be
performed by a disproportionate share of men, but the share of women will increase.

Q163: How can I be effective and efficient, when I do black box testing of e-commerce web sites?
A: When you're doing black box testing of e-commerce web sites, you're most efficient and
effective when you're testing the sites' Visual Appeal, Contents, and Home Pages. When you
want to be effective and efficient, you need to verify that the site is well planned.

Verify that the site is customer-friendly. Verify that the choices of colors are attractive. Verify that
the choices of fonts are attractive. Verify that the site's audio is customer friendly. Verify that the
site's video is attractive. Verify that the choice of graphics is attractive. Verify that every page of
the site is displayed properly on all the popular browsers. Verify the authenticity of facts.

Ensure the site provides reliable and consistent information. Test the site for appearance. Test the
site for grammatical and spelling errors. Test the site for visual appeal, choice of browsers,
consistency of font size, download time, broken links, missing links, incorrect links, and browser
compatibility. Test each toolbar, each menu item, every window, every field prompt, every pop-up
text, and every error message.

Test every page of the site for left and right justifications, every shortcut key, each control, each
push button, every radio button, and each item on every drop-down menu. Test each list box, and
each help menu item. Also check, if the command buttons are grayed out when they're not in use.
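One checklist item above, testing for broken, missing, and incorrect links, can be partly automated. The sketch below only extracts the links from a page's HTML with Python's standard html.parser; actually fetching each URL and checking its HTTP status is left out, to keep the example self-contained.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            # A missing href is recorded as None, flagging a broken anchor.
            self.links.append(dict(attrs).get("href"))

# A tiny made-up page with one good link and one anchor missing its href:
page = '<html><body><a href="/cart">Cart</a><a>broken anchor</a></body></html>'
collector = LinkCollector()
collector.feed(page)
missing = [link for link in collector.links if link is None]
```

In a fuller version, each collected URL would then be requested, and any non-success status code reported as a broken link.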

Q164: What is a backward compatible design?


A: The design is backward compatible, if the design continues to work with earlier versions of a
language, program, code, or software.

When the design is backward compatible, the signals or data that had to be changed, did not
break the existing code.

For instance, our mythical web designer decides that the fun of using Javascript and Flash is
more important than backward compatible design, or, he decides that he doesn't have the
resources to maintain multiple styles of backward compatible web design.

This decision of his will inconvenience some users, because some of the earlier versions of
Internet Explorer and Netscape will not display his web pages properly, as there are some serious
improvements in the newer versions of Internet Explorer and Netscape that make the older
versions of these browsers incompatible with, for example, DHTML.

This is when we say, "This design doesn't continue to work with earlier versions of browser
software. Therefore our mythical designer's web design is not backward compatible".

On the other hand, if the same mythical web designer decides that backward compatibility is
more important than fun, or, if he decides that he has the resources to maintain multiple styles of
backward compatible code, then no user will be inconvenienced.

No one will be inconvenienced, even when Microsoft and Netscape make some serious
improvements in their web browsers.

This is when we can say, "Our mythical web designer's design is backward compatible".

Q164: What is the difference between top down and bottom up design?
A: Top down design proceeds from the abstract (entity) to get to the concrete (design). Bottom up
design proceeds from the concrete (design) to get to the abstract (entity).

Top down design is most often used in designing brand new systems, while bottom up design is
sometimes used when one is reverse engineering a design; i.e. when one is trying to figure out
what somebody else designed in an existing system.

Bottom up design begins the design with the lowest level modules or subsystems, and
progresses upward to the main program, module, or subsystem.

With bottom up design, a structure chart is necessary to determine the order of execution, and
the development of drivers is necessary to complete the bottom up approach.
Top down design, on the other hand, begins the design with the main or top-level module, and
progresses downward to the lowest level modules or subsystems.

Real life sometimes is a combination of top down design and bottom up design.

For instance, data modeling sessions tend to be iterative, bouncing back and forth between top
down and bottom up modes, as the need arises.
Q165: What is the definition of bottom up design?
A: Bottom up design begins the design at the lowest level modules or subsystems, and
progresses upward to the design of the main program, main module, or main subsystem.

To determine the order of execution, a structure chart is needed, and, to complete the bottom up
design, the development of drivers is needed.

In software design - assuming that the data you start with is a pretty good model of what you're trying to do - bottom up design generally starts with the known data (e.g. customer lists, order forms), then the data is broken into chunks (i.e. entities) appropriate for planning a relational database.

This process reveals what relationships the entities have, and what the entities' attributes are.

In software design, bottom up design doesn't only mean writing the program in a different order,
but there is more to it. When you design bottom up, you often end up with a different program.
Instead of a single, monolithic program, you get a larger language, with more abstract operators,
and a smaller program written in it.

Once you abstract out the parts which are merely utilities, what is left is a much shorter program. The higher you build up the language, the less distance you will have to travel down to it, from the top. Bottom up design makes it easy to reuse code blocks.

For example, many of the utilities you write for one program are also useful for programs you
have to write later. Bottom up design also makes programs easier to read.
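A minimal sketch of that idea, with made-up utility names: the reusable utilities are written first, and the final "program" becomes a short composition of them.

```python
# Utilities written first; they form the "larger language" and can be
# reused by later programs as well.
def average(values):
    return sum(values) / len(values)

def within_range(value, low, high):
    return low <= value <= high

# The program itself is now only one line, written in terms of the
# utilities built up beneath it.
def scores_look_reasonable(scores):
    return within_range(average(scores), 0, 100)

assert scores_look_reasonable([70, 85, 90]) is True
assert scores_look_reasonable([70, 85, 900]) is False
```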

Q166: What is smoke testing?


A: Smoke testing is a relatively simple check to see whether the product "smokes" when it runs.
Smoke testing is also known as ad hoc testing, i.e. testing without a formal test plan.

With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is
carried out by a skilled tester, it can often find problems that are not caught during regular testing.

Sometimes, if testing occurs very early or very late in the software development cycle, this can be
the only kind of testing that can be performed.

Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of
smoke testing.

A common practice at Microsoft, and some other software companies, is the daily build and
smoke test process. This means, every file is compiled, linked, and combined into an executable
file every single day, and then the software is smoke tested.

Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect
diagnosis, and improves morale.

Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke
testing should be thorough enough that, if it passes, the tester can assume the product is stable
enough to be tested more thoroughly.

Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry
that guards against any errors in development and future problems during integration.

At first, smoke testing might be the testing of something that is easy to test. Then, as the system
grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
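A hypothetical daily-build-and-smoke-test driver could be sketched as below. The build command and the smoke checks are illustrative placeholders, not any real project's commands.

```python
import subprocess
import sys

def daily_build_and_smoke_test(build_cmd, smoke_checks):
    """Build the product, then run quick end-to-end smoke checks."""
    build = subprocess.run(build_cmd, capture_output=True)
    if build.returncode != 0:
        return "BUILD BROKEN"       # check for broken builds every day
    for check in smoke_checks:
        if not check():
            return "SMOKE TEST FAILED"
    return "OK"

# A trivial stand-in build (it always succeeds) and one smoke check:
result = daily_build_and_smoke_test(
    [sys.executable, "-c", "print('built')"],
    [lambda: 1 + 1 == 2],
)
```

As the system grows, more checks would be appended to the smoke_checks list, so the smoke test expands along with the product.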

Q167: What is the difference between monkey testing and smoke testing?
A: Difference number 1: Monkey testing is random testing, and smoke testing is a nonrandom check to see whether the product "smokes" when it runs. Smoke testing is nonrandom testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems.

Difference number 2: Monkey testing is performed by automated testing tools. On the other hand,
smoke testing, more often than not, is a manual check to see whether the product "smokes" when
it runs.

Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers (to see whether the product "smokes" when it runs).

Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very
valuable for smoke testing, because they are too expensive for smoke testing.

Difference number 5: "Dumb monkeys" are inexpensive to develop, are able to do some basic
testing, but, if we use them for smoke testing, they find few bugs during smoke testing.

Difference number 6: Monkey testing is not a thorough testing, but smoke testing is thorough
enough that, if the build passes, one can assume that the program is stable enough to be tested
more thoroughly.

Difference number 7: Monkey testing does not evolve. Smoke testing, on the other hand, evolves
as the system evolves from something simple to something more thorough.

Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke
testing, on the other hand, takes much less time to run, i.e. anywhere from a few seconds to a
couple of hours.
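A toy "dumb monkey" might look like the sketch below: it feeds random printable-ASCII inputs to the function under test and only checks that nothing crashes. The function under test is a made-up example.

```python
import random

def function_under_test(text):
    # Hypothetical target of the monkey; any function taking a string works.
    return text.strip().lower()

random.seed(42)                  # make the monkey's run repeatable
crashes = 0
for _ in range(1000):
    # Ten random printable-ASCII characters per attempt.
    monkey_input = "".join(chr(random.randint(32, 126)) for _ in range(10))
    try:
        function_under_test(monkey_input)
    except Exception:
        crashes += 1
```

A "smart monkey" would additionally model valid user behavior and state, which is why the text above calls smart monkeys more valuable for load and stress testing.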

Q168: Tell me about the process of daily builds and smoke tests.
A: The idea behind the process of daily builds and smoke tests is to build the product every day,
and test it every day.

The software development process at Microsoft and many other software companies requires
daily builds and smoke tests. According to their process, every day, every single file has to be
compiled, linked, and combined into an executable program. And, then, the program has to be
"smoke tested".
Smoke testing is a relatively simple check to see whether the product "smokes" when it runs.

You should add revisions to the build only when it makes sense to do so. You should establish a Build Group, and build *daily*; set your *own standard* for what constitutes "breaking the build", create a penalty for breaking the build, and check for broken builds *every day*.

In addition to the daily builds, you should smoke test the builds, and smoke test them *daily*. You
should make the smoke test *evolve*, as the system evolves. You should build and smoke test
*daily*, even when the project is under pressure.

Think about the many benefits of this process! The process of daily builds and smoke tests
minimizes the integration risk, reduces the risk of low quality, supports easier defect diagnosis,
improves morale, enforces discipline, and keeps pressure-cooker projects on track.

If you build and smoke test *daily*, success will come, even when you're working on large
projects.

Q169: I have no experience. How can I get a job?


A: There are many who might say, "I need experience to get a job. But, how can I get the
experience, if I cannot get a job?"

The good thing is, when you want a QA Tester job, there are MANY possibilities!

Possibility number 1: Get a job with a company at a lower level, perhaps as a technician,
preferably at a small company, or a company that promotes from within. Once you're hired, work
your way up to the test bench, and you WILL get your first QA Tester experience!

Possibility number 2: Attend a school of good reputation. If your prospective boss is familiar with
your school, you will get your first job!

Possibility number 3: Attend a school that offers job placement, with a real record of job
placement assistance, and do what they say, and you WILL get your first job!

Possibility number 4: Work for a company as a volunteer, i.e. employee without pay. Once you're
hired, you WILL get your first experience!

Possibility number 5: Get your first job by training yourself. Training yourself on a PC with the
proper manual and automated testing tools can be useful, if you use them to their maximum
potential! Get some hands-on experience with manual and automated testing tools.

If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use
WinRunner and many other automated testing tools, with little or no outside help.

Q170: What is the purpose of test strategy?


A: Reason number 1: The number one reason for writing a test strategy document is to have a
signed, sealed, and delivered document, approved by the FDA (or FAA, where applicable), that
includes a written testing methodology, test plan, and test cases.

Reason number 2: Having a test strategy does satisfy one important step in the software testing
process.

Reason number 3: The test strategy document tells us how the software product will be tested.
Reason number 4: The creation of a test strategy document presents an opportunity to review the
test plan with the project team.

Reason number 5: The test strategy document describes the roles, responsibilities, and the
resources required for the test and schedule constraints.

Reason number 6: When we create a test strategy document, we have to put into writing any
testing issues requiring resolution (and usually this means additional negotiation at the project
management level).

Reason number 7: The test strategy is decided first, before lower level decisions are made on the
test plan, test design, and other testing issues.