
GUI Testing Checklist (1)

by Karthik Ramanathan

Section 1 - Windows Compliance Testing

1.1. Application

Start the application by double-clicking on its icon. The loading message should show the application name,
version number, and a bigger pictorial representation of the icon (a 'splash' screen).
No login should be necessary.
The main window of the application should have the same caption as the caption of the icon in Program Manager.
Closing the application should result in an "Are you sure?" message box.
Attempt to start the application twice. This should not be allowed - you should be returned to the main window.
Also try to start the application twice as it is loading.
On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass
(e.g. alpha access enquiries) then some "enquiry in progress" message should be displayed.
All screens should have a Help button, and F1 should do the same.

1.2. For Each Window in the Application


If the window has a Minimise button, click it. The window should shrink to an icon at the bottom of the screen,
and this icon should correspond to the original icon under Program Manager.
Double-click the icon to return the window to its original size.

The window caption of every screen should contain the name of the application and the window name -
especially on the error messages. Check these for spelling, English and clarity, especially at the top
of the screen, and check that the title of the window makes sense.

If the screen has a Control menu, then use all ungreyed options. (see below)

Check all text on window for Spelling/Tense and Grammar


Use TAB to move focus around the window. Use SHIFT+TAB to move focus backwards.
Tab order should be left to right, and top to bottom within a group box on the screen. All controls
should get focus - indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight
the entire text in the field.
The text in the Micro Help line should change as focus moves - check it for spelling, clarity, non-updateable fields, etc.
If a field is disabled (greyed) then it should not get focus. It should not be possible to select it with either
the mouse or by using TAB. Try this for every greyed control.
Fields that are never updateable should be displayed with black text on a grey background with a black label.
All label text should be left-justified, followed by a colon tight to it.
In a field that may or may not be updateable, the label text and contents change from black to grey depending
on the current status.
List boxes always have a white background with black text, whether they are disabled or not. All other controls
are grey when disabled.
In general, do not use "goto" screens; use "gosub" - i.e. if a button causes another screen to be displayed, the
new screen should not hide the first screen, with the exception of the tabs in section 2.0.
When returning, return to the first screen cleanly, i.e. no other screens/applications should appear.
In general, double-clicking should not be essential: everything should be possible using both the mouse and
the keyboard.
All tab buttons should have a distinct access letter.
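Tab-order checks like these can be automated once a UI driver reports each control's position. A minimal sketch, assuming control coordinates are available as (x, y) pairs captured in tab-stop order (the helper name and data below are hypothetical):

```python
# Sketch: verify that tab order runs left-to-right, top-to-bottom.
# The coordinates below are hypothetical; in a real test they would
# come from a UI automation tool while tabbing through the screen.

def tab_order_ok(positions):
    """positions: (x, y) of each control, listed in tab-stop order.
    True if every control is below its predecessor, or on the same
    row and to its right."""
    for (x1, y1), (x2, y2) in zip(positions, positions[1:]):
        if y2 < y1 or (y2 == y1 and x2 < x1):
            return False
    return True

good = [(10, 10), (120, 10), (10, 50), (120, 50)]
bad = [(10, 10), (10, 50), (120, 10)]   # jumps back up a row
print(tab_order_ok(good), tab_order_ok(bad))   # True False
```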

1.3. Text Boxes

Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar.
If it doesn't, then the text in the box should be grey or non-updateable. Refer to section 1.2 above.
Enter text into the box.
Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital W's.
Enter invalid characters - letters in amount fields; try strange characters like +, -, * etc. in all fields.
SHIFT+Arrow should select characters. Selection should also be possible with the mouse. Double-click should
select all text in the box.
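The overflow and field-width probes above lend themselves to a small helper. A sketch, assuming the field width is known from the spec (the helper name is illustrative):

```python
# Sketch: generate the overflow and width-check strings described
# above. The helper name and field width are illustrative.

def overflow_probe(field_width):
    """Return (fit, overflow, width_check): a string that exactly
    fills the field, one that is a character too long (entry should
    be stopped), and a run of capital W's - typically the widest
    glyph - to expose visual truncation."""
    return ("a" * field_width,
            "a" * (field_width + 1),
            "W" * field_width)

fit, over, wide = overflow_probe(10)
print(len(fit), len(over), wide)   # 10 11 WWWWWWWWWW
```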
1.4. Option (Radio Buttons)

Left and Right arrows should move the 'ON' selection. So should Up and Down. Select with the mouse by clicking.
1.5. Check Boxes

Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the same.

1.6. Command Buttons

If a command button leads to another screen, and the user can enter or change details on that screen, then
the text on the button should be followed by three dots.
All buttons except OK and Cancel should have an access letter, indicated by an underlined letter
in the button text. The button should be activated by pressing ALT+letter. Make sure there is no duplication.
Click each button once with the mouse - this should activate it.
Tab to each button and press SPACE - this should activate it.
Tab to each button and press RETURN - this should activate it.
The above are VERY IMPORTANT, and should be done for EVERY command button.
Tab to another type of control (not a command button). One button on the screen should be the default (indicated by
a thick black border). Pressing RETURN in ANY non-command-button control should activate the default button.
If there is a Cancel button on the screen, then pressing <Esc> should activate it.
If pressing a command button results in uncorrectable data (e.g. closing an action step), there should be a message
phrased positively with Yes/No answers, where Yes results in the completion of the action.
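Checking for duplicated ALT+letter access keys can be scripted if the button captions are available. A sketch using the Windows convention of marking the access key with '&' (the captions below are made up):

```python
# Sketch: detect duplicated ALT+letter access keys among button
# captions. Windows marks the access key with '&' before the letter
# ("&Save"). The captions below are made up for illustration.

from collections import Counter

def duplicate_access_keys(captions):
    keys = [c[c.index("&") + 1].lower() for c in captions if "&" in c]
    return sorted(k for k, n in Counter(keys).items() if n > 1)

# '&Save' and '&Search' both claim ALT+S:
print(duplicate_access_keys(["&Save", "&Search", "&Delete", "Cancel"]))   # ['s']
```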
1.7. Drop Down List Boxes

Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type text
in the box.
Pressing a letter should bring you to the first item in the list starting with that letter. Pressing Ctrl+F4
should open/drop down the list box.
Spacing should be compatible with existing Windows spacing (Word etc.). Items should be in alphabetical
order, with the exception of blank/none, which should be at the top or the bottom of the list box.
Dropping down the list with an item selected should display the list with the selected item at the top.
Make sure only one blank entry appears - there shouldn't be a blank line at the bottom.
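The alphabetical-order rule, with a blank/none entry allowed only at the top or bottom, can be expressed as a small predicate. A sketch, assuming the drop-down's items can be read as a list of strings:

```python
# Sketch: check that a drop-down's items are alphabetical, allowing a
# single blank/"none" entry only at the top or bottom, as the
# checklist requires. Items are read as plain strings.

def dropdown_sorted(items):
    core = [i for i in items if i.strip() and i.lower() != "none"]
    special = len(items) - len(core)
    if special > 1:
        return False                      # more than one blank entry
    if special == 1 and items[0] in core and items[-1] in core:
        return False                      # blank entry buried mid-list
    return core == sorted(core, key=str.lower)

print(dropdown_sorted(["", "Apple", "banana", "Cherry"]))   # True
print(dropdown_sorted(["Cherry", "Apple"]))                 # False
```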
1.8. Combo Boxes

Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.
1.9. List Boxes

Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys.
Pressing a letter should take you to the first item in the list starting with that letter.
If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act
in the same way as selecting an item in the list box and then clicking the command button.
Force the scroll bar to appear and make sure all the data can be seen in the box.
Section 2 - Screen Validation Checklist

2.1. Aesthetic Conditions:

1. Is the general screen background the correct colour?


2. Are the field prompts the correct colour?
3. Are the field backgrounds the correct colour?
4. In read-only mode, are the field prompts the correct colour?
5. In read-only mode, are the field backgrounds the correct colour?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all groupboxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be minimisable?
13. Are all the field prompts spelt correctly?
14. Are all character or alpha-numeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the microhelp text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lower case consistently?
19. Where the database requires a value (other than null) then this should be defaulted into fields. The
user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

2.2. Validation Conditions:

1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly,
with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable?
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified
database size?
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory
has become optional then check whether null values are allowed in this field.)
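Item 10's cross-check between database nullability and screen mandatory flags can be sketched as a data-driven comparison (the column names and dictionaries below are illustrative, and fields are assumed to map one-to-one to columns):

```python
# Sketch: cross-check screen mandatory flags against database
# nullability (item 10 above). Column and field names are illustrative
# and assumed to match one-to-one.

def mandatory_mismatches(db_nullable, screen_mandatory):
    """db_nullable: {column: allows_null}.
    screen_mandatory: {field: is_mandatory}.
    Returns columns that forbid NULL but whose field is optional."""
    return sorted(col for col, allows_null in db_nullable.items()
                  if not allows_null and not screen_mandatory.get(col, False))

db = {"surname": False, "middle_name": True}
screen = {"surname": False, "middle_name": False}
print(mandatory_mismatches(db, screen))   # ['surname']
```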

2.3. Navigation Conditions:


1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal, i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?
7. Can a number of instances of this screen be opened at the same time and is this correct?

2.4. Usability Conditions:

1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options which apply to your screen got fast keys associated and should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30-character field should be noticeably longer than a 10-character one.

2.5. Data Integrity Conditions:

1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters?
3. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or
leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represent a fixed set of values such as A, B and C then what happens if a blank value is retrieved from the database? (In some
situations rows can be created on the database by other functions which are not screen based and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets saved fully to the database. i.e. Beware of truncation (of strings) and
rounding of numeric values.

2.6. Modes (Editable Read-only) Conditions:

1. Are the screen and field colours adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

2.7. General Conditions:


1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all toolbars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be
lost - Continue yes/no"
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - i.e. make sure they don't act on
the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is
pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("group boxes").
26. Assure that the Tab key sequence which traverses the screens does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop
background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should be no need to scroll
38. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened,
highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous screen (as appropriate), generating "changes will be lost" message if
necessary.
43. Microhelp text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open

2.8. Specific Field Tests

2.8.1. Date Field Checks

• Assure that leap years are validated correctly & do not cause errors/miscalculations
• Assure that month codes 00 and 13 are validated correctly & do not cause errors/miscalculations
• Assure that month codes 00 and 13 are reported as errors
• Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations
• Assure that Feb. 28, 29 and 30 are validated correctly & do not cause errors/miscalculations
• Assure that Feb. 30 is reported as an error
• Assure that century changes are validated correctly & do not cause errors/miscalculations
• Assure that out-of-cycle dates are validated correctly & do not cause errors/miscalculations
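Most of the probes above can be expressed as data-driven checks against standard calendar rules. A sketch using Python's datetime as the oracle (the helper name is illustrative):

```python
# Sketch: the day, month, and leap-year probes above as data-driven
# checks. Python's datetime enforces standard calendar rules, so it
# serves as the oracle here.

from datetime import date

def valid_dmy(day, month, year):
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

assert not valid_dmy(1, 0, 1996)     # month code 00 rejected
assert not valid_dmy(1, 13, 1996)    # month code 13 rejected
assert not valid_dmy(0, 5, 1996)     # day value 00 rejected
assert not valid_dmy(32, 5, 1996)    # day value 32 rejected
assert valid_dmy(29, 2, 1996)        # leap year accepts Feb. 29
assert not valid_dmy(30, 2, 1996)    # Feb. 30 reported as an error
assert not valid_dmy(29, 2, 1900)    # century rule: 1900 not a leap year
print("date probes pass")
```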

2.8.2. Numeric Fields

• Assure that lowest and highest values are handled correctly
• Assure that invalid values are logged and reported
• Assure that valid values are handled by the correct procedure
• Assure that numeric fields with a blank in position 1 are processed or reported as an error
• Assure that fields with a blank in the last position are processed or reported as an error
• Assure that both + and - values are correctly processed
• Assure that division by zero does not occur
• Include value zero in all calculations
• Include at least one in-range value
• Include maximum and minimum range values
• Include out of range values above the maximum and below the minimum
• Assure that upper and lower values in ranges are handled correctly
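The range probes above boil down to a standard boundary-value set. A sketch that generates one for a field with known limits (the limits shown are illustrative):

```python
# Sketch: boundary-value set covering the probes above - range limits,
# values just outside the range, zero, and a mid-range value. The
# limits passed in are illustrative.

def boundary_values(lo, hi):
    vals = {lo - 1, lo, lo + 1, 0, (lo + hi) // 2, hi - 1, hi, hi + 1}
    return sorted(vals)

print(boundary_values(-100, 100))   # [-101, -100, -99, 0, 99, 100, 101]
```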

2.8.3. Alpha Field Checks

• Use blank and non-blank data
• Include lowest and highest values
• Include invalid characters & symbols
• Include valid characters
• Include data items with first position blank
• Include data items with last position blank

Section 3 - Validation Testing - Standard Actions

3.1. Examples of Standard Actions - Substitute your specific commands

Add
View
Change
Delete
Continue - (i.e. continue saving changes or additions)

Add
View
Change
Delete
Cancel - (i.e. abandon changes or additions)

Fill each field - Valid data
Fill each field - Invalid data

Different Check Box / Radio Box combinations

Scroll Lists / Drop Down List Boxes
Help
Fill Lists and Scroll
Tab
Tab Sequence
Shift Tab

3.2. Shortcut keys / Hot Keys


Note: The following keys are used in some Windows applications, and are included as a guide.

F1 - No modifier: Help. Shift: Enter Help Mode.
F2, F3, F5, F6, F7, F9, F11, F12 - n/a with any modifier.
F4 - Ctrl: Close document / child window. Alt: Close application.
F8 - No modifier: Toggle extend mode, if supported. Shift: Toggle Add mode, if supported.
F10 - No modifier: Toggle menu bar activation.
Tab - No modifier: Move to next active/editable field. Shift: Move to previous active/editable field.
Ctrl: Move to next open document or child window (adding SHIFT reverses the order of movement).
Alt: Switch to previously used application (holding down the ALT key displays all open applications).
Alt - No modifier: Puts focus on the first menu command (e.g. 'File').

3.3. Control Shortcut Keys

Key Function

CTRL + Z Undo

CTRL + X Cut

CTRL + C Copy

CTRL + V Paste

CTRL + N New

CTRL + O Open

CTRL + P Print

CTRL + S Save

CTRL + B Bold*

CTRL + I Italic*

CTRL + U Underline*

* These shortcuts are suggested for text formatting applications, in the context for
which they make sense. Applications may use other modifiers for these operations.
The following edits, questions, and checks should be considered for all numeric fields.

Edit / Question - Examples

Maximum Value & Minimum Value
• Edit Picture (z, 9, #, etc.)
• Field Width
• Boundaries (Upper Limit, Lower Limit)
• Positive and Negative Numbers
• Precision (Whole Numbers and Decimal Places)
• Signed or Unsigned

Delta - Smallest increment used by system
• Whole Numbers
• Fractions
• Decimal Value

Other Tests
• Overflow
• Underflow
• Rounding
• Floating Point Errors

Formats
• Currency (Symbol, Separators, Commas & Periods)
• Input
• Storage
• Output
• Display
• Print
• Integer (16, 32, 64 bit)
• Floating Point
• Binary
• Packed
• Hex, Octal, Scientific Notation
• Placement of Negative Indicator: -, CR, ( ), Leading or Trailing
• Word Boundaries

Attributes
• Position (Display or Print)
• Color (Red for Negative)
• Intensity
• Blinking
• Font Size
• Italics

Zero
• Leading (0123)
• Trailing (123.0)
• Absent (123.)

Spaces Before or After Entry
• Permitted?
• Self Correcting?

Alternative Formats
• Display values in thousands or millions
• Display as words ("One Thousand")
• Roman numerals

Error Messages
• When displayed
• Where displayed
• Should they be acknowledged?
• Automatic Recovery?

Initialization
• Starting Value
• Null Value
• Reset Value

Reasonableness Checks

Entry Format
• Character
• Numeric

Match other formats
• Usage in a calculator
• Appended to another field

Display Issues
• Blank on either side of field to prevent touching another field (123 123 vs. 123123)
• Display with leading zero

Will conversion take place?

Source of Value
• Will it change?
• Multiple sources?
• Alternative Source

Can value be computed?

Balancing instructions
• Audit Issues

Encrypted storage

Is a check digit required?
• Check digit computation

Validation
• Table
• Computation
• Other report

Does the field have other uses?
• SSN, SIN, Employee ID
• Salary, Speed
• Date
• Lookup Key

Naming Conventions

Compiler Requirements

Note the edits that are performed by the programming language, tests that should be handled during unit testing, and checks that should be done via integration or
system testing.
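The "check digit computation" row above varies by field. As one common example, a sketch of the Luhn algorithm used for card numbers - the field under test may well use a different scheme:

```python
# Sketch: one common check-digit scheme - the Luhn algorithm used for
# card numbers - as an example of "check digit computation" above.
# The field under test may well use a different scheme.

def luhn_ok(number: str) -> bool:
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_ok("79927398713"), luhn_ok("79927398710"))   # True False
```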

Other issues:
1. Will boundaries and limits change over time?
2. Are they influenced by something else?
3. Will field accept operators? +, -, /, *, !, **, ^, %
4. Will the value change format?
64 bit to 16 bit
Character to numeric
Display to packed
Display to scientific notation
Display to words
5. Will value move across platforms?
6. Why is the field being treated as numeric?
7. Will voice recognition be necessary?
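Question 3 (operator characters in numeric fields) can be probed mechanically. A sketch in which strictly_numeric stands in for the application's own field-level edit - note that a real edit may legitimately accept a leading minus sign:

```python
# Sketch: probe whether a numeric field's edit rejects operator
# characters (question 3 above). strictly_numeric stands in for the
# application's own field-level edit; a real edit may legitimately
# accept a leading minus sign.

OPERATORS = ["+", "-", "/", "*", "!", "**", "^", "%"]

def strictly_numeric(text):
    return text.isdigit()

rejected = [op for op in OPERATORS if not strictly_numeric("12" + op + "3")]
print(rejected == OPERATORS)   # every operator-bearing string refused
```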

Checklist: Additional Testing Concerns

• Memory: availability, high, low, virtual, swapping, page size
• Resource competition on the system
• Processing: batch, on-line, multiple input sources
• Conflicts: anti-viral software, automated test tools, TSRs, security software, IRQs
• Backup: disaster recovery, backups, rerun capability
• Connectivity: bandwidth, interoperability, modem speed, ISDN lines, distributed applications
• Security: passwords, firewalls
• CD-ROM access and speed
• File conversions
• Design of client/server architecture
• Version Controls
• Display issues: graphics, monitor size, graphics cards
• Printer: type, speed, color, resolution, paper weight, paper size, envelopes, multiple printers
• Platform: mainframe, minicomputer, microcomputer, number of platforms
• Multiple operating systems
• Transactions: size, quantity, rate
• Error processing: message source, location, timing, acknowledgements
The following edits, questions, and checks should be considered for all date fields. Be aware that many programming languages combine date and time into one
data type.
Edit / Question - Examples

Required entry

Century display
• 1850, 1999, 2001

Implied century
• Display last two digits of year (96, 02). All dates are assumed to be between 1950 and 2049.

Date display format
• mm-dd-yy (12/01/96)
• mm-dd-ccyy (12/01/1996)
• dd-mm-yy (01/12/96)
• dd-mm-ccyy (01/12/1996)
• dd-mmm-yy (01-Jan-96)
• dd-mmm-ccyy (01-Jan-1996)
• dd-mm (day and month only)
• Complete date (December 1, 1996)
• Date, abbreviated month (Dec 1, 1996)
• Day included in date (Monday, November 7, 1996)
• yymmdd (960105)
• ccyymmdd (20011231)
• No year, text month and day (May 30th)
• Financial calculator (12.0196)
• System format (provided through system)

Date separator
• Slash (/), dash (-), period (.), space
• Enter separator fields
• Move over separators
• Automatic skip over separators

Leading zeros in day field
• 01, 02, 09 (12/05/96 vs 12/5/96)

Leading zeros in month field
• 01, 02, 09 (05/17/97 vs 5/17/97)

Abbreviated month names
• Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec

Can day numbers exceed actual?
• May 32, Feb 30 (accounting systems may use these for adjusting transactions)

Embedded spaces
• No leading spaces
• No trailing spaces
• One space permitted after month and comma

Font attributes
• Color
• Italics
• Bold
• Size
• Blink
• Intensity

Leap year computations
• Any year evenly divisible by 4, but not by 100, unless it is also divisible by 400
• 1996 - leap year
• 2000 - leap year
• 2004 - leap year
• 2100 - not a leap year

Relational edits
• Compare hire date to birth date.

Use financial calendars
• 30/360 (30 days per month, 360 days per year)
• Actual/360 (actual days per month, 360 days per year)

Entry mechanism
• Display calendar
• (+/-) to change day number
• Function key / PF key

Default date
• System date
• Last date entered
• Other date (order date, closing date, etc.)

Latest / earliest permissible date
• Actual date
• Computed date

Authorization / security required
• Add
• Modify
• Delete
• View

Formats
• Entry
• Storage (date, or relative day number)
• Print
• Display

Null date
• 00/00/00 (zeros)
• bb/bb/bb (spaces)

Is the program responsible for managing dates?

Are autofill features utilized?

Will the date field be used again elsewhere?

Is this a standard date entry routine that is already tested?

Are there other mechanisms to date stamp fields or records?

Is the position of the date important?
• On screen
• In a report

Are other events triggered by this date?

Permissible dates
• Holiday (local, regional, national, international)
• Weekend
• Specific day(s) of week

Is the Julian date required?

Sorting requirements
• Normal
• Relative day number
• Unusual (9's complement, yymmdd, ccyymmdd)

Time zone issues

Is the system voice-enabled for date entry?

Is the date encrypted?
• Encryption technique

Testing
• Must entry dates correspond to dates in the test bed?

Risk factors
• What is the risk inherent in not entering the date correctly?

Edit date
• On entry
• When screen is complete
• When record is complete
• After other event

Are incomplete dates permissible?
• 12/00/1996
• 12/??/1996
• 12/01/????
• 12/01/??

Font
• Acceptable fonts
• Largest font size that will display properly
• Default font

Correction
• Can erroneous dates be automatically corrected?

Error messages
• Content
• Placement
• When displayed
• Can processing continue with a bad date?

Note the edits that are performed by the programming language, tests that should be handled during unit testing, and checks that should be done via integration or
system testing.
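Several of the display formats above can be validated with strptime. A sketch with an illustrative (not exhaustive) format list:

```python
# Sketch: validating a date string against several of the display
# formats listed above, via strptime. The format list is illustrative,
# not the application's actual set.

from datetime import datetime

FORMATS = ["%m-%d-%y", "%m-%d-%Y", "%d-%b-%Y", "%B %d, %Y", "%y%m%d", "%Y%m%d"]

def parse_any(text):
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    return None   # no format accepted the input

print(parse_any("01-Jan-1996"), parse_any("December 1, 1996"), parse_any("13-32-96"))
# 1996-01-01 1996-12-01 None
```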

Other issues:
1. Can invalid dates be passed to this routine? Should they be accepted?
2. Is there a standard date entry routine in the library?
3. Can new date formats be easily added and edited?
4. What is the source of the date: input documents, calendar on the wall, or field on another document?
5. Are there other mechanisms to change dates outside of this program?
6. Is this a date and time field?

Checklist: Developing Windows Application


Modal Windows - Oftentimes, modal windows which must be acted upon end up hidden behind standard windows. This gives the user the impression that the
system has locked up.

Special Characters - Special characters may not be usable on some Windows entry screens; there may also be conflicts when converting data or using data
from other systems.

Printer Configuration - Although Windows is designed to handle the printer setup for most applications, there are formatting differences between printers and
printer types. LaserJet printers do not behave the same as inkjets, nor do 300, 600, or 1200 DPI laser printers behave the same across platforms.

Date Formats - The varying date formats sometimes cause troubles when they are being displayed in windows entry screens. This situation could occur when
programs are designed to handle a YY/MM/DD format and the date format being used is YYYY/MMM/DD.

Screen Savers - Some screen savers such as After Dark are memory or resource ‘hogs’ and have been known to cause troubles when running other applications.

Speed Keys - Verify that there are no conflicting speed keys on the various screens. This is especially important on screens where the buttons change.

Virus Protection Software - Some virus protection software can be configured too strictly. This may cause applications to run slowly or incorrectly.

Disk Compression Tools - Some disk compression software may cause our applications to run slowly or incorrectly.

Multiple Open Windows - How does the system handle having multiple open windows? Are there any resource errors?

Test Multiple Environments - Programs need to be tested under multiple configurations; different configurations can produce different results.

Test Multiple Operating Systems - Programs running under Win 95, Win NT, and Windows 3.11 do not behave the same in all environments.

Corrupted DLLs - Corrupted DLLs will sometimes cause applications not to execute or, more damagingly, to run sporadically.

Incorrect DLL Versions - Incorrect DLL versions will sometimes cause applications not to execute or, more damagingly, to run sporadically.

Missing DLLs - Missing DLLs will usually cause applications not to execute.

Standard Program Look & Feel - The basic windows look & feel should be consistent across all windows and the entire application. Windows buttons, windows
and controls should follow the same standards for sizes.

Tab Order - When pressing the TAB key to change focus from object to object, the progression should be logical.

Completion of Edits - The program should force the completion of edits for any screen before the user has a chance to exit the program.

Saving Screen Sizes - Does the user have an opportunity to save the current screen sizes and position?

Operational Speed - Make sure that the system operates at a functional speed, databases, retrieval, and external references.

Testing Under Loaded Environments - Test system functions while running various resource-hungry programs (MS Word, MS Excel, WP, etc.).

Resource Monitors - Resource monitors help track Windows resources, which, when exhausted, will cause GPFs.

Video Settings - Programmers tend to develop at 800 x 600 or higher resolution; when these programs run at a default 640 x 480, they tend to overfill the
screen. Make sure the application is designed for the resolution used by customers.

Clicking on Objects Multiple Times - Will you get multiple instances of the same object or window with multiple clicks?

Saving Column Orders - Can the user save the orders of columns of the display windows?

Displaying Messages saying that the system is processing - When doing system processing, do we display some information stating what the system is doing?

Clicking on Other Objects While the System is Processing - Is processing interrupted? Do unexpected events occur after processing finishes?

Large Fonts / Small Fonts - Designing windows in one font-size mode and executing in the other can produce mixed results in window layout.

Maximizing / Minimizing all windows - Do the actual screen elements resize? Do we use all of the available screen space when the screen is maximized?

Setup Program - Does your setup program function correctly across multiple OS’s? Does the program prompt the user before overwriting existing files?

Consistency in Operation - The program should behave consistently across all screens and the application as a whole.

Multiple Copies of the same Window - Can the program handle multiple copies of the same window? Can all of these windows be edited concurrently?

Confirmation of Deletes - All deletes should require confirmation before execution.
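The confirmation-of-deletes rule can be sketched as a delete handler that only proceeds on an explicit yes. This is an illustrative sketch; `confirm` is an injected prompt function standing in for a real "Are you Sure" message box:

```python
# Sketch of delete-with-confirmation: the record is removed only when the
# user answers the confirmation prompt affirmatively.
def delete_record(record_id, records, confirm):
    if confirm(f"Are you sure you want to delete record {record_id}?"):
        records.pop(record_id, None)
        return True      # delete executed
    return False         # delete cancelled, record untouched
```

Testing this item means exercising both the "Yes" and "No" paths and verifying the record survives a "No".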

Selecting alternative language options - Will your program handle the use of other languages (French, Spanish, Italian, etc.)?
Build the Plan
1. Analyze the product.

• What to Analyze
• Users (who they are and what they do)
• Operations (what it’s used for)
• Product Structure (code, files, etc.)
• Product Functions (what it does)
• Product Data (input, output, states, etc.)
• Platforms (external hardware and software)
• Ways to Analyze
• Perform product/prototype walkthrough.
• Review product and project documentation.
• Interview designers and users.
• Compare w/similar products.
• Possible Work Products
• Product coverage outline
• Annotated specifications
• Product Issue list
• Status Check
• Do designers approve of the product coverage outline?
• Do designers think you understand the product?
• Can you visualize the product and predict behavior?
• Are you able to produce test data (input and results)?
• Can you configure and operate the product?
• Do you understand how the product will be used?
• Are you aware of gaps or inconsistencies in the design?
• Do you have remaining questions regarding the product?

2. Analyze product risk.

• What to Analyze
• Threats
• Product vulnerabilities
• Failure modes
• Victim impact
• Ways to Analyze
• Review requirements and specifications.
• Review problem occurrences.
• Interview designers and users.
• Review product against risk heuristics and quality criteria categories.
• Identify general fault/failure patterns.
• Possible Work Products
• Component risk matrices
• Failure mode outline
• Status Check
• Do the designers and users concur with the risk analysis?
• Will you be able to detect all significant kinds of problems, should they occur during testing?
• Do you know where to focus testing effort for maximum effectiveness?
• Can the designers do anything to make important problems easier to detect, or less likely to occur?
• How will you discover if your risk analysis is accurate?
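A component risk matrix like the one named above is often reduced to a simple score so effort goes to the riskiest areas first. A minimal sketch, assuming made-up component names and 1-5 likelihood/impact scales:

```python
# Illustrative risk-matrix calculation: score = likelihood x impact, then sort
# components so testing effort goes to the riskiest areas first.
components = {
    # name: (likelihood of failure 1-5, impact if it fails 1-5)
    "payment posting": (4, 5),
    "report printing": (3, 2),
    "splash screen":   (1, 1),
}

def rank_by_risk(components):
    """Return component names ordered from highest to lowest risk score."""
    scores = {name: likelihood * impact
              for name, (likelihood, impact) in components.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranked = rank_by_risk(components)   # riskiest component first
```

The numbers themselves matter less than the conversation with designers and users that produces them.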

3. Design test strategies.

• General Strategies
• Domain testing (including boundaries)
• User testing
• Stress testing
• Regression testing
• Sequence testing
• State testing
• Specification-based testing
• Structural testing (e.g. unit testing)

• Ways to Plan
• Match strategies to risks and product areas.
• Visualize specific and practical strategies.
• Look for automation opportunities.
• Prototype test probes and harnesses.
• Don’t overplan. Let testers use their brains.
• Possible Work Products
• Itemized statement of each test strategy chosen and how it will be applied.
• Risk/task matrix.
• List of issues or challenges inherent in the chosen strategies.
• Advisory of poorly covered parts of the product.
• Test cases (if required)
• Status Check
• Do designers concur with the test strategy?
• Has the strategy made use of every available resource and helper?
• Is the test strategy too generic - could it just as easily apply to any product?
• Will the strategy reveal all important problems?
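Domain testing with boundaries, the first strategy listed above, can be sketched concretely. Assuming a hypothetical field that must accept 1-100 (`validate` is a stand-in for the code under test):

```python
# Domain testing sketch: probe just below, on, and just above each boundary.
def validate(qty, low=1, high=100):
    """Stand-in for the validation logic under test."""
    return low <= qty <= high

def boundary_cases(low, high):
    """Classic six boundary probes for a closed range [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Map each probe value to whether the field accepts it.
results = {v: validate(v) for v in boundary_cases(1, 100)}
```

Off-by-one errors cluster at these edges, which is why the strategy pays off out of proportion to its cost.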

4. Plan logistics.

• Logistical Areas
• Test effort estimation and scheduling
• Testability engineering
• Test team staffing (right skills)
• Tester training and supervision
• Tester task assignments
• Product information gathering and management
• Project meetings, communication, and coordination
• Relations with all other project functions, including development
• Test platform acquisition and configuration
• Possible Work Products
• Issues list
• Project risk analysis
• Responsibility matrix
• Test schedule
• Agreements and protocols
• Test tools and automation
• Stubbing and simulation needs
• Test suite management and maintenance
• Build and transmittal protocol
• Test cycle administration
• Problem reporting system and protocol
• Test status reporting protocol
• Code freeze and incremental testing
• Pressure management in end game
• Sign-off protocol
• Evaluation of test effectiveness
• Status Check
• Do the logistics of the project support the test strategy?
• Are there any problems that block testing?
• Are the logistics and strategy adaptable in the face of foreseeable problems?
• Can you start testing now and sort out the rest of the issues later?

5. Share the plan.

• Ways to Share
• Engage designers and stakeholders in the test planning process.
• Actively solicit opinions about the test plan.
• Do everything possible to help the developers succeed.
• Help the developers understand how what they do impacts testing.
• Talk to technical writers and technical support people about sharing quality information.
• Get designers and developers to review and approve all reference materials.
• Record and reinforce agreements.
• Get people to review the plan in pieces.
• Improve reviewability by minimizing unnecessary text in test plan documents.
• Goals
• Common understanding of the test process.
• Common commitment to the test process.
• Reasonable participation in the test process.
• Management has reasonable expectations about the test process.
• Status Check
• Is the project team paying attention to the test plan?
• Does the project team, especially first line management, understand the role of the test team?
• Does the project team feel that the test team has the best interests of the project at heart?
• Is there an adversarial or constructive relationship between the test team and the rest of the project?

• Does any member of the project team feel that the testers are “off on a tangent” rather than focused on important testing tasks?

Test Plan
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the
test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group
will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
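The "data input equivalence classes" item in the list above can be illustrated with a small sketch. The classes here (for a hypothetical percentage field accepting 0-100) are made-up examples:

```python
# Equivalence-class partitioning sketch: split the input domain into classes,
# then test one representative value per class instead of every value.
equivalence_classes = {
    "valid":        range(0, 101),   # 0..100 accepted
    "invalid_low":  [-50, -1],       # below the domain
    "invalid_high": [101, 999],      # above the domain
}

def representative(cls):
    """Pick one value (the middle member) to stand for the whole class."""
    values = list(equivalence_classes[cls])
    return values[len(values) // 2]
```

One test per class plus the boundary probes of each class usually gives strong coverage at a fraction of the cost of exhaustive input testing.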

Test Management and Planning


Effective and timely planning can have a huge impact on your testing success. Use proven test planning methods and techniques, including the master test plan and
specific test plans for acceptance, system, integration, and unit testing. Learn how to manage test activities, estimate test effort, analyze risks, and achieve buy-in, and
apply test measurement and reporting tactics for monitoring and control.
Who should write the testing plans, and when should this start? Should these plans be written by the developer or the publisher? Should a draft be written at
the first playable build, or even earlier? Again, there is no one 'correct' answer; just factors that may help you decide what is right for your environment.
Having the developer write the test plan can help keep the integrity of the original product and can help ensure that a more thorough test plan is written. Having the
publisher write the test plan can help free up developers to work in their respective disciplines. Writing the test plan early in the development stage can also help
the developer discover any problems before they appear, while waiting until later in the development cycle can make for a more thorough test plan.
Having the development team write the test plan can overwhelm the developer and prolong the development time. Having the publisher write the test plan may
completely miss the mark on what to test for. Writing the test plan early in the development stage can lead to making too vague a test plan, while waiting until
later in the development cycle can make for a test plan that is overly complex.
It's not a bad idea to have the developer start a preliminary draft of a test plan and then pass it off to the publisher to write the final stages of the test plan. Also,
handing over the design document can help the publisher write a more thorough test plan. Ideally a test plan should be a living document. The test plan should be
started as early as possible and be continually updated and revised as the development cycle moves forward.
The Classic Test Planning Model
The classic test planning model breaks the testing process down into four phases:

1. Unit, module and component testing


2. Integration testing
3. System Testing
4. Acceptance Testing

Sample Test Plan


Title
Submitted to: [Name] [Address]
Submitted by: [Name] [Address]
Document No
Contract No
Date
Approvals: [Person Name] [Person Title] [Business Name]
Table of Contents

1. Introduction
• Purpose
• Scope
2. Applicability
• Applicable Documents
• Documents
3. Program Management and Planning
• The SQA Plan
• Organization
• Tasks
4. Software Training
• SQA Personnel
• Software Developer Training Certification
5. SQA Program Requirements
• Program Resources Allocation Monitoring
• SQA Program Audits
1. Scheduled Audits
2. Unscheduled Audits
3. Audits of the SQA Organization
4. Audit Reports
• SQA Records
• SQA Status Reports
• Software Documentation
• Requirements Traceability
• Software Development Process
• Project reviews
1. Formal Reviews
2. Informal Reviews
• Tools and Techniques
• Software Configuration Management
• Release Procedures
• Change Control
• Problem Reporting
• Software Testing
1. Unit Test
2. Integration Test
3. System Testing
4. Validation Testing

Attachment 1 Coding Documentation Guidelines


Attachment 2 Testing Requirements
Test Plan Template
Test Plan Template
(Name of the Product)
TABLE OF CONTENTS
1.0 INTRODUCTION
2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks
3.0 SCOPE
4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing
5.0 Hardware Requirements
6.0 Environment Requirements
6.1 Main Frame
6.2 Workstation
7.0 Test Schedule
8.0 Control Procedures
9.0 Features to Be Tested
10.0 Features Not to Be Tested
11.0 Resources/Roles & Responsibilities
12.0 Schedules
13.0 Significantly Impacted Departments (SIDs)
14.0 Dependencies
15.0 Risks/Assumptions
16.0 Tools
17.0 Approvals
18.0 References
Appendices
Test Plan Driver Method
The "Test Plan Driver" method preserves most of the advantages of the "Function Decomposition" method, while eliminating most of the disadvantages. In this
method, the entire testing process is data-driven, including functionality. The detailed test plan is written in a specific format, then saved in a particular record format
which the pre-written "Utility" scripts use to control the entire processing of the automated test.
Example:
This example shows a test case document developed by the tester using a spreadsheet containing key words in Column 1. In this method, the entire process is
data-driven, including functionality. The key words control the processing. Note that this test case could also be executed manually if necessary.
-----------------------------------------------------------------------------
| Column 1 | Column 2 | Column 3 | Column 4 | Column 5 |
| | | | | |
| Key_Word | Field/Window | Input/Verification | Comment | Pass/Fail |
| | Name | Data | | |
-----------------------------------------------------------------------------
| | | | Verify | |
| Start_Test | Window | Main Menu | Starting | |
| | | | Point | |
-----------------------------------------------------------------------------
| | | | Select | |
| Enter | Selection | 3 | Payment | |
| | | | Option | |
-----------------------------------------------------------------------------
| | | | Access | |
| Action | Press_Key | Enter | Payment | |
| | | | Screen | |
-----------------------------------------------------------------------------
| | | | Verify | |
| Verify | Window | Payment Posting | Screen | |
| | | | accessed | |
-----------------------------------------------------------------------------
| | | | Enter | |
| Enter | Payment | 125.87 | Payment | |
| | Amount | | data | |
-----------------------------------------------------------------------------
| | | | | |
| | Payment | Check | | |
| | Method | | | |
-----------------------------------------------------------------------------
| | | | | |
| Action | Press_Button | Post | Process | |
| | | | Payment | |
-----------------------------------------------------------------------------
| | | | Verify | |
| Verify | Window | Payment Posting | screen | |
| | | | remains | |
-----------------------------------------------------------------------------
| | | | Verify | |
| Verify | Payment | 125.87 | updated | |
| Data | Amount | | data | |
-----------------------------------------------------------------------------
| | | | | |
| | Current | 1,309.77 | | |
| | Balance | | | |
-----------------------------------------------------------------------------
| | | | | |
| | Status | Payment Posted | | |
| | Message | | | |
-----------------------------------------------------------------------------
| | | | | |
| Action | Press | Exit | Return to | |
| | Button | | Main Menu | |
-----------------------------------------------------------------------------
| | | | Verify | |
| End | Window | Main Menu | Return to | |
| Test | | | Main Menu | |
-----------------------------------------------------------------------------
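The keyword-driven control described in the table above can be sketched as a small dispatcher. The keyword names follow the table, but the handler bodies are illustrative assumptions: the fake `screen` dict stands in for the application under test, and a real driver would read the rows from the spreadsheet:

```python
# Sketch of a keyword-driven test driver. Each row is (key_word, name, data);
# pre-written "utility" handlers dispatch on the keyword.
def run_test(rows, screen):
    results = []
    def verify(name, data):            # check a window/field shows expected data
        return screen.get(name) == data
    def enter(name, data):             # type data into the named field
        screen[name] = data
        return True
    handlers = {"Start_Test": verify, "Verify": verify,
                "Enter": enter, "End_Test": verify}
    for key_word, name, data in rows:
        ok = handlers.get(key_word, lambda n, d: True)(name, data)
        results.append((key_word, name, "Pass" if ok else "Fail"))
    return results

rows = [("Start_Test", "Window", "Main Menu"),
        ("Enter", "Payment Amount", "125.87"),
        ("Verify", "Payment Amount", "125.87")]
results_log = run_test(rows, {"Window": "Main Menu"})
```

Because the logic lives in the reusable handlers, testers add or change test cases by editing spreadsheet rows, not scripts, which is the method's main advantage.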
Test Plan Outline

1. BACKGROUND
2. INTRODUCTION
3. ASSUMPTIONS
4. TEST ITEMS
List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED
List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED
Explicitly list each feature, function, or requirement which won't be tested and why not.
7. APPROACH
Describe the data flows and test philosophy.
Simulation or Live execution, Etc.
8. ITEM PASS/FAIL CRITERIA
Blanket statement
Itemized list of expected output and tolerances
9. SUSPENSION/RESUMPTION CRITERIA
Must the test run from start to completion?
Under what circumstances may it be resumed in the middle?
Establish check-points in long tests.
10. TEST DELIVERABLES
What, besides software, will be delivered?
Test report
Test software
11. TESTING TASKS
Functional tasks (e.g., equipment set up)
Administrative tasks
12. ENVIRONMENTAL NEEDS
Security clearance
Office space & equipment
Hardware/software requirements
13. RESPONSIBILITIES
Who does the tasks in Section 11?
What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS

Test plan
Defines the testing approach and resources and schedules the testing activities.
Business requirement
Specifies the requirements of testing and identifies the specific features to be tested by the design.
Test case
Defines a test case identified by a test-design specification.
|---------------------------------------------------------|
| Test Case |
|---------------------------------------------------------|
| Test Case ID: |
| |
| Test Description: |
| |
| Revision History: |
| |
| Date Created: |
| |
| Function to be tested: |
|---------------------------------------------------------|
| Environment: |
| |
| Test Setup: |
| |
| Test Execution: |
| |
| 1. |
| |
| 2. |
| |
| 3. |
|---------------------------------------------------------|
| |
| Expected Results: |
| |
| Actual Results: |
|---------------------------------------------------------|
| Completed: |
| |
| Signed Out: |
|---------------------------------------------------------|
Test Case Form The test case form is used to track all the test cases. It should include the test case number for all the tests being performed, the name of the
test case, the process, the business application condition that was tested, all associated scenarios, and the priority of the test case. The form should also include
the date, the page number of the particular form you are using, and the system and integration information. This form is important because it tracks all the test
cases, allows the test lead or another tester to reference the test case, and shows all the pertinent information of the case at a glance. This information should be
placed in a database or Web site so all members of the team can review the information.
|----------------------------------------------------------|--------------|
| Test Case | Page: |
|----------------------------------------------------------|--------------|
| System: | Date: |
|----------------------------------------------------------|--------------|
| Test | Test | | Application | Associated | |
| Case # | Case Name | Process | Conditions | Tasks | Priority |
|--------|-----------|----------|-------------|------------|--------------|
| | | | | | |
|--------|-----------|----------|-------------|------------|--------------|
| | | | | | |
|-------------------------------------------------------------------------|
Log for tracking test cases Track test cases and test results
Test Case Tracking
|-------------------------------------------------------------------------|
| | | Test | Test | Desired | Actual |
| Date | Function | Case # | Scenario | Results | Results |
|--------|-----------|----------|-------------|------------|--------------|
| | | | | | |
|--------|-----------|----------|-------------|------------|--------------|
| | | | | | |
|-------------------------------------------------------------------------|
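The tracking log above can be kept programmatically by appending one record per executed case and deriving pass/fail from desired vs. actual results. The field names mirror the form; the sample data is illustrative:

```python
# Sketch of the test-case tracking log: one appended record per executed case,
# with status computed by comparing desired and actual results.
log = []

def record(date, function, case_id, scenario, desired, actual):
    log.append({"Date": date, "Function": function, "Test Case #": case_id,
                "Test Scenario": scenario, "Desired Results": desired,
                "Actual Results": actual,
                "Status": "Pass" if desired == actual else "Fail"})

record("3/23/00", "File Open", "1.1", "open from local drive",
       "file opens", "file opens")
record("3/23/00", "File Open", "1.2", "open from network",
       "file opens", "error 32")

# Cases whose actual results diverged from the desired results.
failed = [r["Test Case #"] for r in log if r["Status"] == "Fail"]
```

Keeping the log in a shared database or spreadsheet lets the test lead see pass rates and failing areas at a glance.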
Test matrix Track test cases and errors
|------------------------------------------------------------------------------|
| Test Case: | Test | Test Cases | Pass/ | No. of| Bug# | |
| File Open# | Description | Samples | Fail | Bugs | | Comments |
|-------------|-----------------|------------|-------|-------|------|----------|
| 1.1 |Test file types | 1.1 | P/F | # | # | |
| |supported by | | | | | |
| |the program | | | | | |
|-------------|-----------------|------------|-------|-------|------|----------|
| 1.2 |Verify the | 1.2 | P/F | # | # | |
| |different ways | | | | | |
| |to open file | | | | | |
| |(mouse, keyboard,| | | | | |
| | and accelerated | | | | | |
| | keys). | | | | | |
|-------------|-----------------|------------|-------|-------|------|----------|
| 1.3 |Verify the file | 1.3 | P/F | # | # | |
| |that can be | | | | | |
| |opened from the | | | | | |
| |local drives as | | | | | |
| |well as network | | | | | |
| | | | | | | |
|------------------------------------------------------------------------------|
Bug tracking report Track errors as they occur and how they were corrected.
|---------------------------------------------------------|
| [Bug's Report Title] |
|---------------------------------------------------------|
| [Steps Involved to Reproduce the Error] |
|---------------------------------------------------------|
| [Expected Result] |
|---------------------------------------------------------|
| [Actual Result] |
|---------------------------------------------------------|
| [Note] |
|---------------------------------------------------------|
Weekly status report Give management a weekly progress report of the testing activity.
|------------------------------------------------------------|
| Status for [Person's Name] |
| Week Ending [End of Week Date] |
|------------------------------------------------------------|
| This Week: |
| 1. Details of the progress of the week, what was scheduled|
| 2. |
| 3. |
|------------------------------------------------------------|
| Goals for Next Week: |
| 1. Detail of what should be accomplished for the next |
| week |
| Issues: |
| 1. Issues that need to be addressed and handled. |
| 2. |
| 3. |
|------------------------------------------------------------|
Test Log Administrator The log tracks all information from the previous example. This log will track test log IDs, test case or test script IDs, the test event results,
the action that was taken, and the date the action was taken. The log will also document the system on which the test was run and the page number for the log.
This log is important for tracking all the test logs and showing at a glance the event that resulted and the action taken from the test. This information is critical for
tracking and registering all log entries.
Test log
|-------------------------------------------------------------------------|
| Test Log |
|-------------------------------------------------------------------------|
| System: | Page: |
|----------------------------------------------------------|--------------|
| Test | Test Case | | | Action |
| Log ID | Test Script ID | Test Event Result | Action| Date |
|--------|----------------|------------------------|-------|--------------|
| | | | | |
|--------|----------------|------------------------|-------|--------------|
| | | | | |
|-------------------------------------------------------------------------|
Test script Set up the sequential step-by-step test, giving expected and actual results.
|-------------------------------------------------------------------------|
| Test Script |
|----------------------------------------------------------|--------------|
| Test Script Number | Priority: |
|----------------------------------------------------------|--------------|
| System Tested | Page: |
|----------------------------------------------------------|--------------|
| Test Case Number: | Date: | Tester: |
|-----------------------------------|----------------------|--------------|
| / | | | | Expected | Actual | Test |
|v | Step | Action | Data Entry | Results | Results | Log ID |
|---|------|----------|-------------|----------|-----------|--------------|
| | | | | | | |
|---|------|----------|-------------|----------|-----------|--------------|
| | | | | | | |
|-------------------------------------------------------------------------|
Test Case Information
|---------------------------------------------------------|
| Test Case ID: F1006 |
| Test Description: Verify A |
| Revision History: Refer to form F1050 |
| Date Created: 3/23/00 1.0 - Tester Name - Created |
| Function to be Tested: A |
|---------------------------------------------------------|
| Environment: Windows 2000 |
| Test Setup: N/A |
| 1. Open the program |
| 2. Open a new document |
| 3. Type the text |
| 4. Select the text |
|---------------------------------------------------------|
| Expected Result: A formats the text correctly |
| Actual Results: Pass |
| |
| Completed: Date |
| Signed Out: Name of Tester |
|---------------------------------------------------------|
Issues log Itemize and track specific testing issues and resolutions.
|---------------------------------------------------------------------|
|Ref# |Type of Issue | Priority | Description |
|-----|--------------|-----------|------------------------------------|
| | | | |
|-----|--------------|-----------|------------------------------------|
| | | | |
|---------------------------------------------------------------------|
Resolution The resolution log is used to track issues and how they have been resolved. It uses the reference number assigned to previous documents, the status
of the problem, the last action taken on the problem, and who took that action. It will also report who made the decision for the resolution and how the resolution
will be handled. This document will show the testers what is being done for documented problems and whether their test is contingent on the resolution of a previous
bug or problem.
|--------------------------------------------------------------------|
| | | Last | | | | |
| | | Action| | Parties | Decision | |
|Ref# |Status | Date | Action | Involved | Made | Resolution |
|-----|-------|-------|--------|----------|----------|---------------|
| | | | | | | |
|-----|-------|-------|--------|----------|----------|---------------|
| | | | | | | |
|--------------------------------------------------------------------|
Test Bed The test bed is the testing environment used for all stages of testing.
|-----------------------------------------------------------------------------------|
| Test Bed |
|------------------------|--------------------|-------------------------------------|
| Number of Application: | Date: | Lead Engineer Assigned to project |
|---------------------------------------|-------------------------------------------|
| Dates Application Will be Tested | Anticipated Problem |
|-------------------------------|------------------------------|--------------------|
| Dates for Setting Up Test Bed:| Engineer Assigned to Project:| Addition Resources |
|-------------------------------|------------------------------|--------------------|
| Software / Hardware | Version / Type | Problems |
|-------------------------------|------------------------------|--------------------|
| | | |
|-------------------------------|------------------------------|--------------------|
| | | |
|-------------------------------|------------------------------|--------------------|
| | | |
|-----------------------------------------------------------------------------------|
What makes a good software tester?

1. Know Programming. Might as well start out with the most controversial one. There's a popular myth that testing can be staffed with people who have little or no
programming knowledge. It doesn't work, even though it is an unfortunately common approach. There are two main reasons why it doesn't work.

(1) They're testing software. Without knowing programming, they can't have any real insights into the kinds of bugs that come into software and the likeliest place
to find them. There's never enough time to test "completely", so all software testing is a compromise between available resources and thoroughness. The tester
must optimize scarce resources and that means focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have useful intuition
about where to look.
(2) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products and as mental
disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on those techniques) are unavailable.
The tester who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most simplistic tools.

Taking entry-level programmers and putting them into a test organization is not a good idea because:

(1) Loser Image.


Few universities offer undergraduate training in testing beyond "Be sure to test thoroughly." Entry-level people expect to get a job as a programmer, and if they're
offered a job in a test group, they'll often look upon it as a failure on their part: they believe that they didn't have what it takes to be a programmer in that
organization. This unfortunate perception exists even in organizations that value testers highly.

(2) Credibility With Programmers.


Independent testers often have to deal with programmers far more senior than themselves. Unless they've been through a co-op program as an undergraduate, all
their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative programming
environment is all about. As such, they have no credibility with their programming counterpart, who can slough off their concerns with "Look, kid. You just don't
understand how programming is done here, or anywhere else, for that matter." It sets the novice tester up for failure.

(3) Just Plain Know-How.


The programmer's right. The kid doesn't know how programming is really done. If the novice is a "real" programmer (as contrasted with a "mere tester"), then the
senior programmer will often take the time to mentor the junior and set her straight: but for a non-productive "leech" from the test group? Never! It's easier for the
novice tester to learn all that nitty-gritty stuff (such as doing a build, configuration control, procedures, process, etc.) while working as a programmer than to have
to learn it, without actually doing it, as an entry-level tester.

2. Know the Application.


That's the other side of the knowledge coin. The ideal tester has deep insights into how the users will exploit the program's features and the kinds of cockpit errors
that users are likely to make. In some cases, it is virtually impossible, or at least impractical, for a tester to know both the application and programming. For
example, to test an income tax package properly, you must know tax laws and accounting practices. Testing a blood analyzer requires knowledge of blood
chemistry; testing an aircraft's flight control system requires control theory and systems engineering, and being a pilot doesn't hurt; testing a geological application
demands geology. If the application has a depth of knowledge in it, then it is easier to train the application specialist into programming than to train the programmer
into the application. Here again, paralleling the programmer's qualification, I'd like to see a university degree in the relevant discipline followed by a few years of
working practice before coming into the test group.

3. Intelligence.
Back in the 1960s, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage and we were dipping into other fields for
trainees. The most infamous of these was IBM's Programmer Aptitude Test (PAT). Strangely enough, despite the fact that IBM later repudiated this test, it
continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM learned with follow-on research is that the single most important quality
for programmers is raw intelligence-good programmers are really smart people-and so are good testers.
4. Hyper-Sensitivity to Little Things.
Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given bug can have many
different symptoms, ranging from innocuous to catastrophic. We know that the symptoms of a bug are arbitrarily related in severity to the cause. Consequently,
there is no such thing as a minor symptom-because a symptom isn't a bug. It is only after the symptom is fully explained (i.e., fully debugged) that you have the
right to say if the bug that caused that symptom is minor or major. Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but
not last time-a bug. The keyboard is a little sticky-another bug. The account balance is off by 0.01 cents-great bug. Good testers notice such little things and use
them as an entree to finding a closely-related set of inputs that will cause a catastrophic failure and therefore get the programmers' attention. Luckily, this attribute
can be learned through training.
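The "balance off by 0.01" symptom above has a classic real-world cause worth knowing: naive binary floating-point arithmetic on money. A minimal sketch (not from the original text; the function names are invented for illustration) of how pennies silently go missing, and how exact decimal arithmetic avoids it:

```python
# Illustration of a "little thing" worth pursuing: ten dime deposits that
# don't quite add up to a dollar when accumulated in binary floating point.
from decimal import Decimal

def add_float(balance, deposits):
    # Accumulate using binary floating point -- each 0.10 is inexact.
    for d in deposits:
        balance += d
    return balance

def add_decimal(balance, deposits):
    # Accumulate using exact decimal arithmetic, as money code should.
    for d in deposits:
        balance += Decimal(str(d))
    return balance

deposits = [0.10] * 10                              # ten dime deposits
print(add_float(0.0, deposits))                     # 0.9999999999999999
print(add_decimal(Decimal("0.00"), deposits))       # 1.00
```

The float total prints as 0.9999999999999999 rather than 1.0: exactly the kind of tiny, easily-ignored discrepancy that a good tester treats as the visible edge of a real bug.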

5. Tolerance for Chaos.


People react to chaos and uncertainty in different ways. Some cave in and give up while others try to create order out of chaos. If the tester waits for all issues to
be fully resolved before starting test design or testing, she won't get started until after the software has been shipped. Testers have to be flexible and be able to
drop things when blocked and move on to another thing that's not blocked. Testers always have many (unfinished) irons in the fire. In this respect, good testers
differ from programmers. A compulsive need to achieve closure is not a bad attribute in a programmer-it certainly serves them well in debugging-but in testing, it means
nothing gets finished. The testers' world is inherently more chaotic than the programmers'.

A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill, research has shown, also correlates well with
programmer and tester aptitude, and it mirrors the kind of unresolved chaos with which the tester must deal daily. Here's the theory behind the notion.
If you do a crossword puzzle in ink, you can't put down a word, or even part of a word, until you have confirmed it by a compatible cross-word. So you keep a
dozen tentative entries unmarked and when by some process or another, you realize that there is a compatible cross-word, you enter them both. You keep score
by how many corrections you have to make-not by merely finishing the puzzle, because that's a given. I've done many informal polls of this aptitude at my
seminars and found a much higher percentage of crossword-puzzles-in-ink aficionados than you'd get in a normal population.
6. People Skills.
Here's another area in which testers and programmers can differ. You can be an effective programmer even if you are hostile and anti-social; that won't work for a
tester. Testers can take a lot of abuse from outraged programmers. A sense of humor and a thick skin will help the tester survive. Testers may have to be
diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a ready smile-all work to the independent tester's advantage. This may
explain one of the (good) reasons that there are so many women in testing. Women are generally acknowledged to have more highly developed people skills than
comparable men-whether it is something innate on the X chromosome as some people contend or whether it is that without superior people skills women are
unlikely to make it through engineering school and into an engineering career, I don't know and won't attempt to say. But the fact is there and those sharply-honed
people skills are important.
7. Tenacity.
An ability to reach compromises and consensus can be at the expense of tenacity. That's the other side of the people skills. Being socially smart and diplomatic
doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both-socially adept and tenacious where it matters. The best testers
are so skillful at it that the programmer never realizes that they've been had. Tenacious-my picture is that of an angry pitbull fastened on a burglar's rear-end. Good
testers don't give up. You can't intimidate them-even by pulling rank. They'll need high-level backing, of course, if they're to get you the quality your product and
market demand.

8. Organized.
I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all the other
accouterments of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can make mistakes, so they double-
check their findings. They have the facts and figures to support their position. When they claim that there's a bug-believe it, because if the developers don't, the
tester will flood them with well-organized, overwhelming evidence.

A consequence of a well-organized mind is a facility for good written and oral communications. As a writer and editor, I've learned that the inability to express
oneself clearly in writing is often symptomatic of a disorganized mind. I don't mean that we expect everyone to write deathless prose like a Hemingway or Melville.
Good technical writing is well-organized, clear, and straightforward: and it doesn't depend on a 500,000 word vocabulary. True, there are some unfortunate
individuals who express themselves superbly in writing but fall apart in an oral presentation- but they are typically a pathological exception. Usually, a well-
organized mind results in clear (even if not inspired) writing and clear writing can usually be transformed through training into good oral presentation skills.

9. Skeptical.
That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and that all is fit to be questioned. Only tangible evidence in
documents, specifications, code, and test results matters. While they may patiently listen to the reassuring, comfortable words from the programmers ("Trust me. I
know where the bugs are.")-and do it with a smile-they ignore all such insubstantial assurances.

10. Self-Sufficient and Tough.


If they need love, they don't expect to get it on the job. They can't look to their interactions with programmers as a source of ego-gratification
and/or nurturing. Their ego is gratified by finding bugs, with few misgivings about the pain (in the programmers) that such finding might engender. In this respect,
they must practice very tough love.
11. Cunning.
Or as Gruenberger put it, "low cunning." "Street wise" is another good descriptor, as are insidious, devious, diabolical, fiendish, contriving, treacherous, wily, canny,
and underhanded. Systematic test techniques such as syntax testing and automatic test generators have reduced the need for such cunning, but the need is still
with us and undoubtedly always will be because it will never be possible to systematize all aspects of testing. There will always be room for that offbeat kind of
thinking that will lead to a test case that exposes a really bad bug. But this can be taken to extremes and is certainly not a substitute for the use of systematic test
techniques. The cunning comes into play after all the automatically generated "sadistic" tests have been executed.

12. Technology Hungry.


They hate dull, repetitive work-they'll do it for a while if they have to, but not for long. The silliest thing for a human to do, in their mind, is to pound on a keyboard
when they're surrounded by computers. They have a clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll
find ways to eliminate all such error-prone procedures. I've seen excellent testers re-invent the capture/playback tool many times. I've seen dozens of home-brew
test data generators. I've seen excellent test design automation done with nothing more than a word processor, or earlier, with a copy machine and lots of bottles
of white-out. I've yet to meet a tester who wasn't hungry for applicable technology. When asked why they didn't automate such and such, the answer was never "I
like to do it by hand." It was always one of the following: (1) "I didn't know that it could be automated", (2) "I didn't know that such tools existed", or worst of all, (3)
"Management wouldn't give me the time to learn how to use the tool."
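The "home-brew test data generator" mentioned above is a simple idea in practice: a small script that mass-produces plausible and deliberately hostile input records. A minimal sketch (the record layout and names are invented for illustration, not taken from any tool in this text):

```python
# A toy test data generator: emits customer records mixing "normal" values
# with boundary values (empty names, zero, negative, and max-int balances).
import random
import string

def random_record(rng):
    """Generate one plausible-plus-hostile customer record."""
    name_len = rng.randint(0, 40)   # include empty as well as long names
    name = "".join(rng.choice(string.ascii_letters + " '-") for _ in range(name_len))
    balance = rng.choice([0, -1, 1, 2**31 - 1, rng.randint(-10**6, 10**6)])
    return {"name": name, "balance": balance}

def generate(count, seed=0):
    # Seeding makes the "random" data reproducible, so a failing case
    # can be replayed exactly during debugging.
    rng = random.Random(seed)
    return [random_record(rng) for _ in range(count)]

for rec in generate(3):
    print(rec)
```

The seed is the important design choice: random data that cannot be regenerated turns every failure into an unreproducible anecdote.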

13. Honest.
Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This fundamental honesty extends to
a brutally realistic understanding of their own limitations as a human being. They accept the idea that they are no better and no worse, and therefore no less error-
prone than their programming counterparts. So they apply the same kind of self-assessment procedures that good programmers will. They'll do test inspections
just like programmers do code inspections. The greatest possible crime in a tester's eye is to fake test results.
Personal Requirements For Software Quality Assurance Engineers

Challenges
Rapidly changing requirements
Foresee defects that are likely to happen in production
Monitor and Improve the software development processes
Ensure that standards and procedures are being followed
Customer Satisfaction and confidence
Compete in the market

Identifying Software Quality Assurance Personnel Needs:


Requirement Specification
Functional Specification
Technical Specification
Standards document and user manuals – If applicable (e.g. Coding standards document)
Test Environment Setup
Professional Characteristics of a good SQA Engineer
Understanding of business approach and goals of the organization
Understanding of entire software development process
Strong desire for quality
Establish and enforce SQA methodologies, processes and Testing Strategies
Judgment skills to assess high-risk areas of application
Communication with Analysis and Development team
Report defects with full evidence
Take preventive actions
Take actions for Continuous improvement
Reports to higher management
Say No when Quality is insufficient
Work Management
Meet deadlines

Personal Characteristics of a good SQA Engineer


Open Minded
Observant
Perceptive
Tenacious
Decisive
Diplomatic
Keen for further training/trends in QA
Part I - Software Test Tool Vendors and Products
Part II - Bug Tracking Software
Part III - Software Test Automation Tool Evaluation Criteria
Part IV - Comparing SilkTest and WinRunner

Part I - Software Test Tool Vendors and Products

Compuware
Compuware Corporation is a recognized industry leader in enterprise software and IT services that help maximize the value of technology investments. We offer a
powerful set of integrated solutions for enterprise IT including IT governance, application development, quality assurance and application service management.
Compuware is one of the largest software test tool vendors. It has a turnover in excess of $2 billion and staff of more than 15,000. 9,500 of these are professional
services staff with skills covering all the development lifecycle. Compuware does not only supply the tools but will provide staff to initially develop your test suite
and handover to internal staff as required.
Compuware's test tool set is second only to Rational on the Windows platform (for coverage), but for complete coverage across platforms including mainframe and
Unix they are the best. So for the larger company that requires a complete testing solution to cover these platforms it is probably best to start with Compuware as
they will offer unit test, database test, mainframe, functional, load, web test, defect tracking and more in their tool set. No other vendor can offer this range.
Compuware Website http://www.compuware.com/
Compuware Software Test Tools
Compuware provides tools for Requirements Management, Risk-based Test Management, Unit, Functional and Load Testing, Test Data Management, and Quality
Discipline.
Compuware Application Reliability Solution (CARS) offers a more effective approach. CARS combines our patented methodology with innovative enterprise-wide
technologies and certified quality assurance expertise to instill a consistent discipline across development, quality assurance and operations. By following this
systematic testing approach, you:
- adhere to a consistent quality assurance process
- deliver the quality metrics required to make a sound go/no go decision
- ensure the most critical of business requirements are met
QACenter Enterprise Edition: Requirements Management Tool. Align testing with business requirements. With QACenter Enterprise Edition you can:
- prioritize testing activities through the assignment of risk
- align test requirements with business goals
- quickly measure progress and effectiveness of test activities
- centrally manage and execute various manual and automated testing assets
- automate the process of entering, tracking and resolving defects found during testing.
Compuware DevPartner: A family of products providing a comprehensive development, debugging and tuning solution to the challenges of application
development, from concept to coding to security and finally to completion. DevPartner products cover Microsoft, Java™, 64-bit and driver development, helping
you improve productivity and increase software reliability—from simple two-tier applications to complex distributed and web-based systems.
Xpediter: Analyze, test and debug mainframe applications. With Xpediter you can:
- Analyze programs and applications
- Test and debug programs interactively
- Understand and control the process of data and logic
- Identify what has executed within an application
- Debug DB2 Stored Procedures
- Test date and time related logic
File-AID: Test data management tool. Helps you pull together test data from multiple sources to create, move, convert, reformat, subset, and validate your test data
bed. The test methodology has helped organizations test more efficiently and effectively.

Rational
Rational is now part of IBM, which is leader in the invention, development and manufacture of the industry's most advanced information technologies, including
computer systems, software, storage systems and microelectronics. Rational offers the most complete lifecycle toolset (including testing).
When it comes to Object Oriented development they are the acknowledged leaders, with most of the leading OO experts working for them. Some of their products
are worldwide leaders, e.g. Rational Rose, ClearCase, RequisitePro, etc.
Their Unified Process is a very good development model that I have been involved with which allows mapping of requirements to use cases, test cases and a
whole set of tools to support the process.
If you are developing products using an OO approach then you should include Rational in the evaluation.
Rational Website http://www-306.ibm.com/software/rational/
Rational Tools
Rational Functional Tester - An advanced, automated functional and regression testing tool for testers and GUI developers who need superior control for testing
Java, Microsoft Visual Studio .NET, and Web-based applications.
Rational Manual Tester - A manual test authoring and execution tool for testers and business analysts who want to improve the speed, breadth, and reliability of
their manual testing efforts. Promotes test step reuse to reduce the impact of software change on manual test maintenance activities.
Rational Performance Tester - IBM Rational Performance Tester is a load and performance testing solution for teams concerned about the scalability of their Web-
based applications. Combining ease of use with deep analysis capabilities, Rational Performance Tester simplifies test creation, load generation, and data
collection to help ensure that applications can scale to thousands of concurrent users.
Rational Purify - Advanced runtime and memory management error detection. Does not require access to source code and can thus be used with third-party
libraries in addition to home-grown code.
Rational Robot - General-purpose test automation tool for QA teams who want to perform functional testing of client/server applications.
Rational Test RealTime - Cross-platform solution for component testing and runtime analysis. Designed specifically for those who write code for embedded and
other types of pervasive computing products.
Mercury Interactive
Mercury is the global leader in Business Technology Optimization (BTO) software and services. Our BTO products and solutions help customers govern and
manage IT and optimize application quality, performance, and availability. Mercury enables IT organizations to shift their focus from managing IT projects to
optimizing business outcomes. Global 2000 companies and government agencies worldwide rely on Mercury to lower IT costs, reduce risks, and optimize for
growth; address strategic IT initiatives; and optimize enterprise application environments like J2EE, .NET, and ERP/CRM.
Mercury has a number of complementary tools, TestDirector being the most integrated one. They have a lot of third-party support, and new test tools are usually
compared against Mercury's before any others. Mercury tends to use third-party companies to supply professional services support for their tools (e.g. if you
require onsite development of test suites).
Mercury Website http://www.mercury.com/
Mercury Interactive Software Test Tools
Mercury TestDirector: allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering
requirements, planning and scheduling tests, analyzing results, and managing defects and issues. TestDirector is a single, Web-based application for all essential
aspects of test management — Requirements Management, Test Plan, Test Lab, and Defects Management. You can leverage these core modules either as a
standalone solution or integrated within a global Quality Center of Excellence environment.
Mercury QuickTest Professional: provides the industry's best solution for functional test and regression test automation - addressing every major software
application and environment. This next-generation automated testing solution deploys the concept of Keyword-driven testing to radically simplify test creation and
maintenance. Unique to QuickTest Professional’s Keyword-driven approach, test automation experts have full access to the underlying test and object properties,
via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.
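The keyword-driven approach described above can be sketched in a few lines: tests are tables of (keyword, arguments) rows, and a small driver maps each keyword to an action. This is a rough illustration of the general idea only, not Mercury's actual implementation; the class and keyword names are invented:

```python
# Minimal keyword-driven test runner: non-programmers compose tests as
# tables of keywords; the driver translates each row into an action.

class FakeApp:
    """Stand-in for the GUI application under test."""
    def __init__(self):
        self.fields = {}
    def type_into(self, field, text):
        self.fields[field] = text
    def read(self, field):
        return self.fields.get(field, "")

def run_keywords(app, table):
    """Execute a table of (keyword, *args) rows against the app."""
    def enter(field, value):
        app.type_into(field, value)
    def verify(field, expected):
        actual = app.read(field)
        assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"
    actions = {"enter": enter, "verify": verify}
    for keyword, *args in table:
        actions[keyword](*args)

app = FakeApp()
run_keywords(app, [("enter", "username", "alice"),
                   ("verify", "username", "alice")])
print("keyword table passed")
```

The payoff claimed for this style is maintainability: when the application changes, only the keyword implementations change, while the (much larger) library of test tables survives intact.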
Mercury WinRunner: offers your organization a powerful tool for enterprisewide functional and regression testing. Mercury WinRunner captures, verifies, and
replays user interactions automatically, so you can identify defects and ensure that business processes work flawlessly upon deployment and remain reliable. With
Mercury WinRunner, your organization gains several advantages, including:
- Reduced testing time by automating repetitive tasks.
- Optimized testing efforts by covering diverse environments with a single testing tool.
- Maximized return on investment through modifying and reusing test scripts as the application evolves.
Mercury Business Process Testing: the industry’s first web-based test automation solution, can add real value. It enables non-technical business analysts to build,
data-drive, and execute test automation without any programming knowledge. By empowering business analysts and quality automation engineers to collaborate
more effectively using a consistent and standardized process, you can:
Improve the productivity of your testing teams.
Detect and diagnose performance problems before system downtime occurs.
Increase the overall quality of your applications.
ActiveTest: Can help ensure that users have a positive experience with a Web site. ActiveTest is a hosted, Web-based testing service that conducts full-scale stress
testing of your Web site. By emulating the behavior of thousands of customers using your Web application, ActiveTest identifies bottlenecks and capacity
constraints before they affect your customers.
Mercury LoadRunner: prevents costly performance problems in production by detecting bottlenecks before a new system or upgrade is deployed. You can verify
that new or upgraded applications will deliver intended business outcomes before go-live, preventing over-spending on hardware and infrastructure. It is the
industry-standard load testing solution for predicting system behavior and performance, and the only integrated load testing, tuning, and diagnostics solution in the
market today. With LoadRunner web testing software, you can measure end-to-end performance, diagnose application and system bottlenecks, and tune for better
performance—all from a single point of control. It supports a wide range of enterprise environments, including Web Services, J2EE, and .NET.

Segue
Segue Software is a global leader dedicated to delivering quality optimization solutions that ensure the accuracy and performance of enterprise applications. Today
Segue® solutions are successfully meeting the quality optimization challenges of more than 2,000 customers around the world, including 61% of the Fortune 100.
Our results-oriented approach helps our customers optimize quality every step of the way.
Anyone who has used SilkTest alongside any of the other test tools will agree that it is the most function-rich out of the box. However, the learning curve (if you
have no programming experience) is the steepest. In my opinion it provides the most robust facilities: an object map, test recovery facilities and an object-based
development language. Segue's performance test tool SilkPerformer also performs very well compared to its rivals, e.g. LoadRunner, LoadTest, etc.
Segue Website see http://www.segue.com/.
Segue Software Test Tools
SilkCentral Test Manager - Automate your testing process for optimal quality and productivity. SilkCentral Test Manager is an all-inclusive test management system
that builds quality and productivity into the testing process to speed the delivery of successful enterprise applications. It lets you plan, document and manage each
step of the testing cycle from capturing and organizing key business requirements, tracing them through execution … designing the optimal test plans …
scheduling tests for unattended execution … tracking the progress of manual and automated tests … identifying the features at risk … and assessing when the
application is ready to go live.
SilkCentral Issue Manager - Resolve issues quickly & reliably by automating the tracking process. An estimated 80% of all software costs are spent on resolving
application defects. With SilkCentral™ Issue Manager, you can reduce the cost and speed the resolution of defects and other issues throughout the entire
application lifecycle. SilkCentral Issue Manager features a flexible, action-driven workflow that adapts easily to your current business processes and optimizes
defect tracking by automatically advancing each issue to its next stage. Its Web user interface provides 24x7x365 access to a central repository of all defect-
related information - simplifying usage among geographically dispersed groups and promoting collaboration among different departments. Meanwhile insightful
reports enable you to determine project readiness based on the status of important issues.
SilkTest - Meet the time-to-market & quality goals of enterprise applications. SilkTest is the industry-leading automated tool for testing the functionality of enterprise
applications in any environment. It lets you thoroughly verify application reliability within the confines of today's short testing cycles by leveraging the accuracy,
consistency and time-saving benefits of Segue's automated testing technology. Designed for ease of use, SilkTest includes a host of productivity-boosting features
that let both novice and expert users create functional tests quickly, execute them automatically and analyze results accurately. With less time spent testing, your
QA staff can expand test coverage and optimize application quality. In addition to validating the full functionality of an application prior to its initial release, users
can easily evaluate the impact of new enhancements on existing functionality by simply reusing existing test cases.
SilkTest International - Ensure the reliability of multi-lingual enterprise applications. When it comes to localized versions of global applications, companies
traditionally resort to second-class manual testing - a time-consuming and costly process which leaves a large margin of error. SilkTest International changes all
that by providing a quick, accurate and fully automated way to test localized applications.
SilkPerformer Component - Optimize component quality and reduce costs by testing remote application components early in development. As the central building
blocks of a distributed application, remote application components are key to ensuring application quality. SilkPerformer® Component Test Edition from Segue®
lets you test and optimize three major quality aspects of critical remote components early in the application lifecycle - even before client applications are available.
SilkPerformer - Test the limits of your enterprise applications. SilkPerformer® is the industry's most powerful - yet easiest to use - automated load and performance
testing solution for optimizing the performance, scalability and reliability of mission-critical enterprise applications. With SilkPerformer, you can accurately predict
the "breaking points" in your application and its underlying infrastructure before it is deployed, regardless of its size or complexity. SilkPerformer has the power to
simulate thousands of simultaneous users working with multiple computing environments and interacting with various application environments such as Web,
client/server, Citrix® MetaFrame®, or ERP/CRM systems - all with a single script and one or more test machines. Yet its visual approach to scripting and root-
cause analysis makes it amazingly simple and efficient to use. So you can create realistic load tests easily, find and fix bottlenecks quickly, and deliver high-
performance applications faster than ever.
SilkCentral Performance Manager - Optimize the availability, performance and accuracy of mission-critical applications. SilkCentral™ Performance Manager is an
application performance management solution for optimizing the quality of mission-critical applications. SilkCentral Performance Manager monitors the end-user
experience on three dimensions: availability, accuracy and performance. Active monitoring utilizes synthetic business transactions for service-level and
performance monitoring, while passive monitoring provides an understanding of real-user behavior by recording actual user transactions.
Facilita
Forecast by Facilita is mainly used for performance testing, but functionally it is as strong as the other performance tools, and it usually costs at least 50% less.
Facilita Website http://www.facilita.com/
Facilita Software Test Tools
forecast - The Load and Performance Test Tool. A non-intrusive tool for system load testing, performance measurement and multi-user functional testing. Load test
your enterprise infrastructure by simulating thousands of users performing realistic user actions.

Empirix
Empirix is the leading provider of integrated testing and management solutions for Web and voice applications and VoIP networks.
Empirix Website http://www.empirix.com/
Empirix Software Test Tools
e-TEST suite - A powerful, easy-to-use application testing solution that ensures the quality, performance, and reliability of your Web applications and Web
Services. This integrated, full lifecycle solution allows you to define and manage your application testing process, validate application functionality, and ensure that
your applications will perform under load. With e-TEST suite, you can deploy your Web applications and Web Services in less time while maximizing the efficiency
of your testing team.
e-Manager Enterprise - A comprehensive test management solution that allows you to plan, document, and manage the entire application testing process. Its
intuitive, Web-based interface and integrated management modules allow you to set up a customized testing process to fit the needs of your organization.
e-Tester - A flexible, easy-to-use solution for automated functional and regression testing of your Web applications and Web Services. It provides the fastest way to
create automated scripts that emulate complex Web transactions. e-Tester then allows you to use these scripts for automated functional and regression testing.
The same scripts can also be used in e-Load for load and performance testing and in OneSight for post-deployment application management.
e-Load - A powerful solution that enables you to easily and accurately test the performance and scalability of your Web applications and Web Services. Using e-
Load you can simulate hundreds or thousands of concurrent users, executing real business transactions, to analyze how well your Web applications will perform
under load. It also allows you to monitor the performance of your back-end application infrastructure, during your load test, to identify bottlenecks and help you
tune application performance. e-Load is fully accessible via a Web browser interface, which enables testers and developers to collaborate during the application
testing and tuning process.

OpenSTA
OpenSTA is a distributed software testing architecture designed around CORBA; it was originally developed to be commercial software by CYRANO. The current
toolset has the capability of performing scripted HTTP and HTTPS heavy load tests with performance measurements from Win32 platforms. However, the
architectural design means it could be capable of much more.
OpenSTA Website http://www.opensta.org/
The OpenSTA toolset is Open Source software licensed under the GNU GPL (General Public License); this means it is free and will always remain free.
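At its core, what a load driver like OpenSTA does is fire many concurrent scripted transactions and collect per-request timings. A toy sketch of that idea (assumptions: a local stub stands in for the real HTTP/HTTPS request so the example is self-contained; this is not OpenSTA's architecture):

```python
# Toy load driver: N "virtual users" issue requests concurrently and the
# harness records each request's elapsed time for later analysis.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_http_get(url):
    """Stub transaction; a real driver would issue an HTTP request here."""
    start = time.perf_counter()
    time.sleep(0.01)                      # pretend the server took ~10 ms
    return time.perf_counter() - start    # elapsed seconds for this request

def load_test(url, virtual_users, requests_per_user):
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(fake_http_get, url)
                   for _ in range(virtual_users * requests_per_user)]
        return [f.result() for f in futures]

timings = load_test("http://example.invalid/", virtual_users=5, requests_per_user=4)
print(f"{len(timings)} requests, worst {max(timings) * 1000:.1f} ms")
```

Real tools add the parts this sketch omits: scripted multi-step sessions, ramp-up schedules, think times, and server-side resource monitoring correlated with the client-side timings.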

AutoTester
AutoTester was founded in 1985 and was the first automated test tool company. Since its inception, AutoTester has continually led the automated software testing
industry with its innovative and powerful testing tools designed to help customers worldwide with e-business, SAP R/3, ERP, and Windows software quality
initiatives.
AutoTester Website http://www.autotester.com/
AutoTester Software Test Tools
AutoTester ONE - Functional, regression, and systems integration testing of Windows, Client Server, Host/Legacy, or Web applications. Provides true end-to-end
testing of applications.
Parasoft
Parasoft is the leading provider of innovative solutions that automatically identify software errors and prevent them from recurring in the development and QA process.
For more than 15 years, this privately held company has delivered easy-to-use, scalable and customizable error prevention tools and methodologies through its
portfolio of leading brands, including Jtest, C++test, Insure++ and SOAtest.
Parasoft Website http://www.parasoft.com/
Parasoft Software Test Tools
WebKing - An automated Web application testing product that automates the most critical Web verification practices: static analysis, functional/regression testing,
and load testing.
Jtest - An automated Java unit testing and coding standard analysis product. It automatically generates and executes JUnit tests for instant verification, and allows
users to extend these tests. In addition, it checks whether code follows over 500 coding standard rules and automatically corrects violations of over 200 rules.
C++test - An automated C/C++ unit testing and coding standard analysis product. It automatically generates and executes unit tests for instant verification, and
allows users to customize and extend these tests as needed. In addition, it checks whether code follows over 700 coding standard rules.
SOAtest - An automated Web services testing product that allows users to verify all aspects of a Web service, from WSDL validation, to unit and functional testing
of the client and server, to performance testing. SOAtest addresses key Web services and SOA development issues such as interoperability, security, change
management, and scalability.
.TEST - An automated unit testing and coding standard analysis product that tests classes written on the Microsoft® .NET Framework without requiring developers
to write a single test case or stub.

winperl.com
winperl.com is a small company with a single, straightforward product called WinPerl++/GUIdo.
WinPerl Website http://www.winperl.com/
winperl.com Software Test Tools
WinPerl++/GUIdo - A suite of tools written to meet the need for a Windows UI automation tool with an easy-to-learn scripting language. For this reason,
the Perl programming language was chosen. WinPerl is a collection of Windows DLLs, Perl modules, a developer UI and related tools which make its use in various
environments possible. The WinPerl++ (a.k.a. GUIdo) tool suite is ideal for the following purposes:
- Windows UI application automation.
- Corporate SQA application test efforts.
- Automating repetitive tasks.
- IT functions that eliminate human interaction.

Dynamic Memory Solutions


DMS software is focused on Quality Tools for Software Development. DMS is headquartered in Connecticut, USA and is privately owned.
Website http://www.dynamic-memory.com/
Dynamic Memory Solutions Software Test Tools
Dynamic Profile - The Dynamic Profile delivers the Unix performance information your staff needs to keep your software running fast. All too often, production jobs
run slower than predicted for some reason. A storm of conjecture and blame ensues regarding the diagnosis of the problem. Is it the new patch, the database, the
network, a disk, ...? With the Dynamic Profile your operations staff can gather key information to diagnose the problem.
Dynamic Leak Check for UNIX - Dynamic Memory Solutions offers a complete solution to memory leak problems in the UNIX environment. Primarily targeted at
medium to large software companies, Dynamic Leak Check makes leak detection easy and fast. Your programmers, testers and operations team can improve
software quality, lower support costs, and increase customer satisfaction simply by using Dynamic Leak Check. The tool is so effective that our customers often
experience an immediate total Return On Investment.
Dynamic Code Coverage - Dynamic Code Coverage allows your team to identify untested portions of your code. This measurement leads to more effective testing,
and effective testing is a cornerstone of software quality. By using Dynamic Code Coverage your team will improve your software quality immediately.
Dynamic Debug - Dynamic Debug identifies and pinpoints a wide variety of memory errors including:
Overflow errors on heap, stack, global and shared memory.
New/delete, malloc/free errors. C/C++ has a large variety of errors associated with heap memory interfaces.
Interface errors to standard libraries e.g. strcpy, printf
Uninitialized memory access
Accessing memory after free/delete
Null Pointer problems
I/O problems

Operative Software Products


In 1992, Operative Software Products was founded with a goal to provide products which aid those most involved in the operational aspects of computer
systems. That is how the name Operative was chosen. At the time, newly networked PC systems were beginning to require enterprise solutions comparable to
mainframe solutions.
Website http://www.operativesoft.com/
AccordSQA
Leading the revolution in business process optimization through true-automation in quality assurance. Break through the programming wall with next-generation
test automation technology and accelerate your Time-to-Quality™.
AccordSQA Website http://www.accordsqa.com/
AccordSQA Software Test Tools
SmarteScript - The first functional and regression testing tool built on next-generation technology. Powered by the patent-pending Grid-Visualization Engine™,
SmarteScript empowers the AccordSQA user with the ability to incorporate the business logic of their application directly into the testing process through Smarte
Process Optimization™ – the true test of software quality and performance. The result is the highest level of quality assurance for the most complex of software
applications. Simply comprehensive testing.

Candela Technologies
Candela Technologies provides powerful and affordable Ethernet traffic generation equipment featuring large port counts in a compact form factor.
Candela Website http://www.candelatech.com/
Candela Technologies Software Test Tools
LANforge FIRE - LANforge now has improved support for Microsoft Windows operating systems, though LANforge on Linux remains the most precise, featureful,
and highest-performing option.

Seapine Software
Seapine's software development and testing tools streamline your development process, saving you significant time and money. Enjoy feature-rich tools that are
flexible enough to work in any software development environment. With Seapine integrated tools, every step in the development process feeds critical information
into the next step, letting you focus on developing high quality software in less time.
Seapine Website http://www.seapine.com/
Seapine Software Software Test Tools
QA Wizard - Incorporates a user-friendly interface with integrated data and a robust scripting engine. It adapts easily to new and evolving technologies and can be
mastered in a short period of time. The result: you deliver higher quality software faster.

Part II - Bug Tracking Software


RMTrack - http://www.rmtrack.com/ - Provides a powerful set of features for managing your issue tracking process. Some highlights of the RMTrack application:

• Web based access allows your users to access the database from anywhere.
• Available as a hosted solution or a download for local installation, whichever suits your needs best.
• Completely customizable issue fields, workflows, data entry forms, user groups and projects let you manage your data, your way.
• Carefully designed to be user friendly and intuitive so there is little or no training required for end users. Each screen has context sensitive help and full
user guides are available from every help page.
• Integrated screen capture tool allows for easy one-click capture of screen shots

BugTracker from Applied Innovation Management - http://www.bugtracking.com/ - The complete bug tracking software solution for bug, defect, feature and
request tracking. Your one-stop-shop solution for web-based bug tracking software. With a 12-year history, over 500 installed sites, a million users and a 30-day
money back guarantee you can't go wrong! Call us for a demo today!
DefectTracker from Pragmatic Software - http://www.defecttracker.com - Defect Tracker is a fully web-based defect tracking and support ticket system that
manages issues and bugs, customer requirements, test cases, and allows team members to share documents.
PR-Tracker from Softwise Company - http://www.prtracker.com/ - PR-Tracker is an enterprise-level problem tracking system designed especially for bug
tracking. PR-Tracker is easy to use and set up. It has a sensible default configuration so you can begin tracking bugs right away while configuring the software on
the fly to suit your special needs. Features classification, assignment, sorting, searching, reporting, access control, user permissions, attachments, email
notification and much more.
TestTrack Pro from Seapine Software - http://www.seapine.com/ttpro.html - TestTrack Pro delivers time-saving issue management features that keep all team
members informed and on schedule. Its advanced configurability and scalability make it the most powerful solution at the best value. Move ahead of your
competition by moving up to TestTrack Pro.
Bugzilla from Mozilla Organization - http://www.bugzilla.com/ - Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect tracking systems allow
individuals or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge
enormous licensing fees. Despite being "free", Bugzilla has many features its expensive counterparts lack. Consequently, Bugzilla has quickly become a favorite of
hundreds of organizations across the globe.
BugCollector - http://www.nesbit.com/ - BugCollector Pro 3.0 is a multiuser database specifically designed for keeping track of software bugs and feature
requests. With it, you can track bugs from first report through resolution and feature requests from initial contact through implementation.
ProblemTracker - http://www.netresultscop.com/fs_pbtrk_info.html - ProblemTracker is a powerful, easy-to-use Web-based tool for defect tracking and change
management. ProblemTracker delivers the benefits of automated bug tracking to any desktop in a familiar Web browser interface, at a price every organization can
afford.
ClearQuest - http://www.rational.com/ - ClearQuest is a flexible defect tracking/change request management system for tracking and reporting on defects.
SWBTracker - http://www.softwarewithbrains.com/suntrack.htm - SWBTracker supports concurrent multiuser licensing at an extremely competitive price, as well as
many of the most important features developers and testers are looking for in today's bug tracking software: automatic email notifications with customizable
message templates, complete issue life-cycle tracking with automatic change history logging, a custom report designer, and many built-in summary and detail
reports.
Elementool - http://www.elementool.com/ - Elementool is an application service provider for Web-based software bug tracking and support management tools.
Elementool provides its services to software companies and business Web sites all over the world.
Part III - Software Test Automation Tool Evaluation Criteria
Ease of Use

• Learning curve
• Easy to maintain the tool
• Easy to install--tool may not be used if difficult to install

Tool Customization

• Can the tool be customized (can fields in tool be added or deleted)?


• Does the tool support the required test procedure naming convention?

Platform Support

• Can it be moved and run on several platforms at once, across a network (that is, cross-Windows support, Win95, and WinNT)?

Multiuser Access

• What database does the tool use? Does it allow for scalability?
• Network-based test repository--necessary when multiple access to repository is required

Defect Tracking
• Does the tool come with an integrated defect-tracking feature?

Tool Functionality

• Test scripting language--does the tool use a flexible, yet robust scripting language? What is the complexity of the scripting language: is it a 4GL? Does it
allow for modular script development?
• Complexity of scripting language
• Scripting language allows for variable declaration and use; allows passing of parameters between functions
• Does the tool use a test script compiler or an interpreter?
• Interactive test debugging--does the scripting language allow the user to view variable values, step through the code, integrate test procedures, or jump
to other external procedures?
• Does the tool allow recording at the widget level (object recognition level)?
• Does the tool allow for interfacing with external .dll and .exe files?
• Published APIs--language interface capabilities
• ODBC support--does the tool support any ODBC-compliant database?
• Is the tool intrusive (that is, does source code need to be expanded by inserting additional statements)?
• Communication protocols--can the tool be adapted to various communication protocols (such as TCP/IP, IPX)?
• Custom control support--does the tool allow you to map to additional custom controls, so the tool is still compatible and usable?
• Ability to kick off scripts at a specified time; scripts can run unattended
• Allows for adding timers
• Allows for adding comments during recording
• Compatible with the GUI programming language and entire hardware and software development environment used for the application under test (i.e., VB,
PowerBuilder)
• Can query or update test data during playback (that is, allows the use of SQL statements)
• Supports the creation of a library of reusable functions
• Allows for wrappers (shells) where multiple procedures can be linked together and are called from one procedure
• Test results analysis--does the tool allow you to easily see whether the tests have passed or failed (that is, automatic creation of test results log)?
• Test execution on script playback--can the tool handle error recovery and unexpected active windows, log the discrepancy, and continue playback
(automatic recovery from errors)?
• Allows for synchronization between client and server
• Allows for automatic test procedure generation
• Allows for automatic data generation
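Several of the scripting criteria above (modular script development, parameter passing between functions, a reusable function library, and wrappers that chain procedures) can be pictured with a short sketch. Python is used here purely for illustration (a real tool would use its own scripting language, such as TSL or 4Test), and every function and field name is hypothetical:

```python
# Illustrative sketch only: all names are invented, and the "app" dict
# stands in for the state of the application under test.

def login(app, user, password):
    """Reusable library function: data is passed as parameters, not hard-coded."""
    app["user"] = user
    app["logged_in"] = (password != "")
    return app

def create_record(app, record):
    """Second reusable procedure, sharing the same application state."""
    app.setdefault("records", []).append(record)
    return app

def smoke_test(app):
    """Wrapper (shell): links multiple procedures and runs them as one test."""
    app = login(app, "tester", "secret")
    app = create_record(app, {"id": 1})
    return app["logged_in"] and len(app["records"]) == 1
```

Because `login` and `create_record` live in a shared library and take parameters, many test scripts can call them without duplicating code, which is exactly what the "reusable function" and "wrapper" criteria are probing for.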

Reporting Capability

• Ability to provide graphical results (charts and graphs)


• Ability to provide reports
• What report writer does the tool use?
• Can predefined reports be modified and/or can new reports be created?

Performance and Stress Testing

• Performance and stress testing tool is integrated with GUI testing tool
• Supports stress, load, and performance testing
• Allows for simulation of users without requiring use of physical workstations
• Ability to support configuration testing (that is, tests can be run on different hardware and software configurations)
• Ability to submit a variable script from a data pool or library of scripts/data entries and logon IDs/passwords
• Supports resource monitoring (memory, disk space, system resources)
• Synchronization ability so that a script can access a record in database at the same time to determine locking, deadlock conditions, and concurrency
control problems
• Ability to detect when events have completed in a reliable fashion
• Ability to provide client to server response times
• Ability to provide graphical results
• Ability to provide performance measurements of data loading

Version Control

• Does the tool come with integrated version control capability?


• Can the tool be integrated with other version control tools?
Test Planning and Management

• Test planning and management tool is integrated with GUI testing tool

• Test planning and management tool is integrated with requirements management tool
• Test planning and management tool follows specific industry standard on testing process (such as SEI/CMM, ISO)
• Supports test execution management
• Allows for test planning--does the tool support planning, managing, and analyzing testing efforts? Can the tool reference test plans, matrices, and
product specifications to create traceability?
• Allows for measuring test progress
• Allows for various reporting activities

Pricing

• Is the price within the estimated price range?


• What type of licensing is being used (floating, fixed)?
• Is the price competitive?

Vendor Qualifications

• Maturity of product
• Market share of product
• Vendor qualifications, such as financial stability and length of existence. What is the vendor's track record?
• Are software patches provided, if deemed necessary?
• Are upgrades provided on a regular basis?
• Customer support
• Training is available
• Is a tool Help feature available? Is the tool well documented?
• Availability and access to tool user groups

Part IV - Comparing SilkTest and WinRunner


Startup Initialization and Configuration
• SilkTest derives its initial startup configuration settings from its partner.ini file. This is not particularly important, though, because SilkTest can be reconfigured at any point in
the session by either changing any setting in the Options menu or loading an Option Set. An Option Set file (*.opt) permits customized configuration settings to be
established for each test project. The project-specific Option Set is then loaded [either interactively, or under program control] prior to the execution of the
project’s testcases. The Options menu or an Option Set can also be used to load an include file (*.inc) containing the project’s GUI Declarations [discussed in
section 2.6 on page 5], along with any number of other include files containing library functions, methods, and variables shared by all testcases.
• WinRunner derives its initial startup configuration from a wrun.ini file of settings. During startup the user is normally polled [this can be disabled] for the type of
addins they want to use during the session [refer to section 2.3 on page 3 for more information about addins]. The default wrun.ini file is used when starting
WinRunner directly, while project specific initializations can be established by creating desktop shortcuts which reference a project specific wrun.ini file. The use of
customized wrun.ini files is important because once WinRunner is started with a selected set of addins you must terminate WinRunner and restart it to use a
different set of addins. The startup implementation supports the notion of a startup test which can be executed during WinRunner initialization. This allows project-
specific compiled modules [memory resident libraries] and GUI Maps [discussed in section 2.6 on page 5] to be loaded. The functions and variables contained in
these modules can then be used by all tests that are run during that WinRunner session.
Both tools allow most of the configuration setup established in these files to be overridden with runtime code in library functions or the test scripts.
Test Termination
• SilkTest tests terminate on exceptions which are not explicitly trapped in the testcase. For example if a window fails to appear during the setup phase of testing
[i.e. the phase driving the application to a verification point], a test would terminate on the first object or window timeout exception that is thrown after the errant
window fails to appear.
• WinRunner tests run to termination [in unattended Batch mode] unless an explicit action is taken to terminate the test early. Therefore tests which ignore this
termination model will continue running for long periods of time after a fatal error is encountered. For example if a window fails to appear during the setup phase of
testing, subsequent context sensitive statements [i.e. clicking on a button, performing a menu pick, etc.] will fail—but this failure occurs after a multi-second
object/window “is not present” timeout expires for each missing window and object. [When executing tests in non-Batch mode, that is in Debug, Verify, or Update
modes, WinRunner normally presents an interactive dialog box when implicit errors such as missing objects and windows are encountered].
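The difference between the two termination models can be sketched as follows (Python for illustration only; the window names and helper functions are invented):

```python
class ObjectTimeout(Exception):
    """Raised when a window or object fails to appear within its timeout."""

def find_window(ui, name):
    # "ui" is a stand-in for the running application's visible windows.
    if name not in ui:
        raise ObjectTimeout(f"window '{name}' did not appear")
    return ui[name]

def terminate_on_exception(ui, log):
    # SilkTest-style: the first untrapped exception ends the testcase.
    find_window(ui, "Login")
    log.append("Login: ok")          # never reached if Login is missing
    find_window(ui, "Main")
    log.append("Main: ok")

def run_to_completion(ui, log):
    # WinRunner batch-style: each step absorbs its own timeout failure and
    # the script keeps running to the end, logging each discrepancy.
    for name in ("Login", "Main", "Report"):
        try:
            find_window(ui, name)
            log.append(f"{name}: ok")
        except ObjectTimeout as exc:
            log.append(f"{name}: FAILED ({exc})")
```

In the first model a missing window stops the test immediately; in the second, every remaining step still pays its own timeout and the failures pile up in the log, which is why tests that ignore the termination model can run for a long time after a fatal error.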
Addins and Extensions
Out of the box, under Windows, both tools can interrogate and work with objects and windows created with the standard Microsoft Foundation Class (MFC) library.
Objects and windows created using a non-MFC technology [or non-standard class naming conventions] are treated as custom objects. Dealing with truly custom
objects is discussed further in section 2.8 on page 6. But objects and windows created for web applications [i.e. applications which run in a browser], Java
applications, Visual Basic applications, and PowerBuilder applications are dealt with in a special manner:
• SilkTest enables support for these technologies using optional extensions. Selected extensions are enabled/disabled in the current Option Set [or the
configuration established by the default partner.ini option settings].
• WinRunner enables support for these technologies using optional addins. Selected addins are enabled/disabled using either the Addin Manager at WinRunner
startup, or by editing the appropriate wrun.ini file prior to startup. Note that (1) some combinations of addins [WinRunner] and extensions [SilkTest] are mutually
exclusive, (2) some of these addins/extensions may no longer be supported in the newest releases of the tool, (3) some of these addins/extensions may only
support the last one or two releases of the technology [for example versions 5 and 6 of Visual Basic] and (4) some of these addins and extensions may have to be
purchased at an additional cost.
Visual Recorders
SilkTest provides visual recorders and wizards for the following activities:
• Creating a test frame with GUI declarations for a full application, and adding/deleting selective objects and windows in an existing GUI declarations frame file.
• Capturing user actions with the application into a testcase, using either context sensitive [object relative] or analog [X:Y screen coordinate relative] recording techniques.
• Inspecting identifiers, locations and physical tags of windows and objects.
• Checking window and object bitmaps [or parts thereof].
• Creating a verification statement [validating one or more object properties].
WinRunner provides visual recorders and wizards for the following activities:
• Creating an entire GUI Map for a full application, and adding/deleting selective objects and windows in an existing GUI Map. It is also possible to implicitly create GUI Map entries by capturing user actions [using the recorder described next].
• Capturing user actions with the application into a testcase, using either context sensitive [object relative] or analog [X:Y screen coordinate relative] recording techniques.
• Inspecting logical names, locations and physical descriptions of windows and objects.
• Checking window and object bitmaps [or parts thereof].
• Creating a GUI checkpoint [validating one or more object properties].
• Creating a database checkpoint [validating information in a database].
• Creating a database query [extracting information from a database].
• Locating at runtime a missing object referenced in a testcase [and then adding that object to the GUI Map].
• Teaching WinRunner to recognize a virtual object [a bitmap graphic with functionality].
• Creating Data Tables [used to drive a test from data stored in an Excel-like spreadsheet].
• Checking text on a non-text object [using a built-in character recognition capability].
• Creating a synchronization point in a testcase.
• Defining an exception handler.
Some of these recorders and wizards do not work completely for either tool against all applications, under all conditions. For example, neither tool’s recorder to
create a full GUI Map [WinRunner] or test frame [SilkTest] works against large applications, or any web application. Evaluate the recorders and wizards of interest
carefully against your applications if these utilities are important to your automated testing efforts.
Object Hierarchy
• SilkTest supports a true object-oriented hierarchy of parent-child-grandchild-etc. relationships between windows and objects within windows. In this model an
object such as a menu is the child of its enclosing window and a parent to its menu item objects.
• WinRunner, with some rare exceptions [often nested tables on web pages], has a flat object hierarchy where child objects exist in parent windows. Note that web
page frames are treated as windows, and not child objects of the enclosing window on web pages that are constructed using frames.
Object Recognition
Both of these tools use a lookup table mechanism to isolate the variable name used to reference an object in a test script from the description used by the
operating system to access that object at runtime:
• SilkTest normally places an application’s GUI declarations in a test frame file. There is generally one GUI declaration for each window and each object in a
window. A GUI declaration consists of an object identifier—the variable used in a test script—and its class and object tag definition used by the operating system to
access that object at runtime. SilkTest provides the following capabilities to define an object tag: (1) a string, which can include wildcards; (2) an array reference
which resolves to a string, which can include wildcards; (3) a function or method call that returns a string, which can include wildcards; (4) an object class and class-
relative index number; and (5) multiple tags [multi-tags], each optionally conditioned with (6) an OS/GUI/browser specifier [a qualification label].
• WinRunner normally places an application’s logical name/physical descriptor definitions in a GUI Map file. There is generally one logical name/physical descriptor
definition for each window and each object in a window. The logical name is used to reference the object in a test script, while the physical descriptor is used by
the operating system to access that object at runtime. WinRunner provides the following capabilities to define a physical descriptor: (1) a variable number of
comma delimited strings which can include wildcards, where each string identifies one property of the object. [While there is only a single method of defining a
physical descriptor, this definition can include a wide range and variable number of obligatory, optional, and selector properties on an object by object basis]. The
notion behind this lookup table mechanism is to permit changes to an object tag [SilkTest] or a physical descriptor [WinRunner] definition without the need to
change the associated identifier [SilkTest] or logical name [WinRunner] used in the testcase. In general the object tag [SilkTest] or physical descriptor [WinRunner]
resolves to one or more property definitions which uniquely identify the object in the context of its enclosing parent window at runtime. It is also possible with both
tools to dynamically construct and use object tags [SilkTest] or physical descriptors [WinRunner] at runtime to reference objects in test scripts.
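The lookup-table mechanism both tools share reduces to something like the following sketch (Python for illustration only; the map entries and property names are invented):

```python
# Miniature "GUI Map" / test frame: the script-facing identifier on the left
# is isolated from the OS-level description on the right.
gui_map = {
    "LoginWindow": {"class": "Window", "label": "Log*n"},   # wildcard tag
    "OkButton":    {"class": "PushButton", "label": "OK"},
}

def physical_descriptor(logical_name):
    """Resolve a script identifier to the runtime description."""
    return gui_map[logical_name]

def click(logical_name):
    """A testcase references only the logical name, never the descriptor."""
    return f"clicking {physical_descriptor(logical_name)['label']}"

# If developers relabel the button, only the map entry changes; every
# testcase that calls click("OkButton") is untouched.
gui_map["OkButton"]["label"] = "Accept"
```

This is the point of the indirection: `click("OkButton")` now resolves to the new label without any test script being edited.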
Object Verification
Both tools provide a variety of built-in library functions permitting a user to hand code simple verification of a single object property [i.e. is/is not focused, is/is not
enabled, has/does not have an expected text value, etc.]. Complex multiple properties in a single object and multiple object verifications are supported using visual
recorders:
• SilkTest provides a Verify Window recorder which allows any combination of objects and object properties in the currently displayed window to be selected and
captured. Using this tool results in the creation, within the testcase, of a VerifyProperties() method call against the captured window.
• WinRunner provides several GUI Checkpoint recorders to validate (1) a single object property, (2) multiple properties in a single object, and (3) multiple
properties of multiple objects in a window. The single property recorder places a verification statement directly in the test code while the multiple property recorders
create unique checklist [*.ckl] files in the /chklists subdirectory [which describe the objects and properties to capture], as well as an associated expected results
[*.chk] file in the /exp subdirectory [which contains the expected value of each object and property defined in the checklist file]. Both tools offer advanced features
to define new verification properties [using mapping techniques and/or built-in functions and/or external DLL functions] and/or to customize how existing properties
are captured for standard objects.
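The verification style both recorders generate, comparing a set of captured object properties against expected values and reporting each mismatch, can be sketched as follows (Python for illustration; the property names are invented):

```python
def verify_properties(actual, expected):
    """Compare captured object properties with an expected checklist.

    Returns a list of mismatch descriptions; an empty list means the
    verification passed.
    """
    mismatches = []
    for prop, want in expected.items():
        got = actual.get(prop)
        if got != want:
            mismatches.append(f"{prop}: expected {want!r}, got {got!r}")
    return mismatches

# A captured button and two checks against it:
ok_button = {"enabled": True, "label": "OK", "focused": False}
passed = verify_properties(ok_button, {"enabled": True, "label": "OK"})
failed = verify_properties(ok_button, {"label": "Accept"})
```

The checklist/expected-results file pairing WinRunner writes to disk, and the VerifyProperties() call SilkTest generates, are both elaborations of this same compare-and-report loop.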
Custom Objects
Note: The description of this feature, more than any other in this report, is limited in its scope and coverage. An entire white paper could be dedicated to exploring
and describing how each of these tools deals with custom objects. Therefore, dedicate several days to evaluating how each of these tools accommodates custom
objects in your specific applications. To deal with a custom object [i.e. an object that does not map to a standard class] both tools support the use of class mapping
[i.e. mapping a custom class to a standard class with like functionality], along with a variety of X:Y pixel coordinate clicking techniques [some screen absolute,
some object relative] to deal with bitmap objects, as well as the ability to use external DLL functions to assist in object identification and verification. Beyond these
shared capabilities each tool has the following unique custom object capabilities:
• SilkTest has a feature to overlay a logical grid of X rows by Y columns on a graphic that has evenly spaced “hot spots” [this grid definition is then used to define
logical GUI declarations for each hot spot]. These X:Y row/column coordinates are resolution independent [i.e. the logical reference says “give me the 4th-column item
in the 2nd row”, where the grid expands or contracts depending on screen resolution].
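The resolution independence of such a grid comes from storing only row/column indices and computing pixel positions from the graphic's current bounding box. A sketch of the arithmetic (Python for illustration; not SilkTest's actual implementation):

```python
def hot_spot_center(bbox, rows, cols, row, col):
    """Pixel center of the hot spot at (row, col), both 1-based.

    bbox = (left, top, width, height) of the graphic at the current screen
    resolution; the grid stretches or shrinks with it, so the logical
    (row, col) reference never changes.
    """
    left, top, width, height = bbox
    x = left + (col - 0.5) * (width / cols)
    y = top + (row - 0.5) * (height / rows)
    return (x, y)
```

Asking for the 4th column in the 2nd row of a 4x8 grid yields (350.0, 150.0) when the graphic is drawn at 800x400 pixels and (700.0, 300.0) when the same graphic is drawn at 1600x800: the logical reference stays fixed while the pixel answer scales.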
• WinRunner has a built-in text recognition engine which works with most standard fonts. This capability can often be used to extract visible text from custom
objects, position the cursor over a custom object, etc. The engine can also be taught non-standard font types which it does not understand out of the box.
Both tools offer support for testing non-graphical controls through the advanced use of custom DLLs [developed by the user], or the Extension Kit [SilkTest, which
may have to be purchased at an additional cost] and the Mercury API Function Library [WinRunner].
[Source: SilkTest and WinRunner Feature Descriptions, Version 1.00 (7/6/00)]
Internationalization (Language Localization)
• SilkTest supports the single-byte IBM extended ASCII character set, and its Technical Support has also indicated “that Segue has no commitment for unicode”.
The user guide chapter titled “Supporting Internationalized Applications” shows a straightforward technique for supporting X number of [single-byte IBM extended
ASCII character set] languages in a single test frame of GUI declarations.
• WinRunner provides no documentation on how to use the product to test language localized applications. Technical Support has indicated that (1) “WinRunner
supports multi-byte character sets for language localized testing…”, (2) “there is currently no commitment for the unicode character set…”, and (3) “it is possible to
convert a US English GUI Map to another language using a [user developed] phrase dictionary and various gui_* built-in functions”.
Database Interfaces
Both tools provide a variety of built-in functions to perform Structured Query Language (SQL) queries to control, alter, and extract information from any database
which supports the Open Database Connectivity (ODBC) interface.
Database Verification
Both tools provide a variety of built-in functions to make SQL queries to extract information from an ODBC compliant database and save it to a variable [or if you
wish, an external file]. Verification at this level is done with hand coding.
WinRunner also provides a visual recorder to create a Database Checkpoint used to validate the selected contents of an ODBC compliant database within a
testcase. This recorder creates side files similar to GUI Checkpoints and has a built-in interface to (1) the Microsoft Query application [which can be installed as
part of Microsoft Office], and (2) the Data Junction application [which may have to be purchased at an additional cost], to assist in constructing and saving
complex database queries.
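As a language-neutral sketch of this hand-coded verification pattern (the document shows no TSL or 4Test code, so Python's stdlib sqlite3 module stands in for the ODBC connection, and the orders table and its data are invented for the example):

```python
import sqlite3

def fetch_rows(conn, status):
    """Run a query and return rows for hand-coded verification.

    In WinRunner/SilkTest this would use the built-in SQL/db_*
    functions against an ODBC data source; sqlite3 stands in here.
    """
    cur = conn.execute(
        "SELECT id, name FROM orders WHERE status = ? ORDER BY id", (status,)
    )
    return cur.fetchall()

# Build a throwaway in-memory database to demonstrate the pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "alpha", "open"), (2, "beta", "closed"), (3, "gamma", "open")],
)

actual = fetch_rows(conn, "open")
expected = [(1, "alpha"), (3, "gamma")]
# "Verification at this level is done with hand coding":
assert actual == expected, f"DB verification failed: {actual!r}"
```

The same shape applies whether the extracted rows are held in a variable, as here, or written to an external file first.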
Data Driven Testing
Both tools support the notion of data-driven tests, but implement this feature differently:
• SilkTest’s implementation is built around an array of user defined records. A record is a data structure defining an arbitrary number of data items which are
populated with values when the array is initialized [statically or at runtime]. Non-programmers can think of an array of records as a memory-resident spreadsheet
of X rows by Y columns where each row/column intersection contains a data item. The test code, as well as the array itself, must be
hand coded. It is also possible to populate the array each time the test is run by extracting the array’s data values from an ODBC compliant database, using a
series of built-in SQL function calls. The test then iterates through the array such that each iteration of the test uses the data items from the next record in the array
to drive the test or provide expected data values.
• WinRunner’s implementation is built around an Excel compatible spreadsheet file of X rows by Y columns where each
row/column intersection contains a data item. This spreadsheet is referred to as a Data Table. The test code, as well as the Data Table itself, can be created with
hand coding or the use of the DataDriver visual recorder. It is also possible to populate a Data Table file each time the test is run by extracting the table’s data
values from an ODBC compliant database using a WinRunner wizard interfaced to the Microsoft Query application. The test then iterates through the Data Table
such that each iteration of the test uses the data items from the next row in the table to drive the test or provide expected data values. Both tools also support the
capability to pass data values into a testcase for a more modest approach to data driving a test.
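The record/row iteration both tools implement can be sketched in Python (the fare-lookup function and its data are hypothetical stand-ins for the application under test and its expected values):

```python
# Each record supplies the inputs and the expected result for one iteration,
# like one row of a WinRunner Data Table or one SilkTest record.
test_data = [
    {"origin": "JFK", "destination": "LAX", "expected_fare": 350},
    {"origin": "SFO", "destination": "SEA", "expected_fare": 120},
]

def run_fare_test(record, fare_lookup):
    """One data-driven iteration: drive the test with the record's
    inputs and compare the result to the record's expected value."""
    actual = fare_lookup(record["origin"], record["destination"])
    return "pass" if actual == record["expected_fare"] else "fail"

# A stub standing in for the application under test.
def fake_fare_lookup(origin, destination):
    fares = {("JFK", "LAX"): 350, ("SFO", "SEA"): 120}
    return fares[(origin, destination)]

# The test iterates through the table, one record per iteration.
results = [run_fare_test(rec, fake_fare_lookup) for rec in test_data]
```

Populating `test_data` from a database query at runtime, as both tools allow, would replace only the literal list above; the iteration loop is unchanged.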
Restoring an Application’s Initial State
• SilkTest provides a built-in recovery system which restores the application to a stable state, referred to as the basestate, when the test or application under test
fails ungracefully. The default basestate is defined to be: (1) the application under test is running; (2) the application is not minimized; and (3) only the application’s
main window is open and active. There are many built-in functions and features which allow the test engineer to modify, extend, and customize the recovery
system to meet the needs of each application under test.
• WinRunner does not provide a built-in recovery system. You need to code routines to return the application under test to its basestate—and dismiss all orphaned
dialogs—when a test fails ungracefully.
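A minimal sketch of what such a hand-coded recovery routine looks like, as WinRunner requires (Python used for illustration; the failing test and the restore logic are placeholders):

```python
def with_recovery(testcase, restore_basestate):
    """Run a testcase; if it fails ungracefully, invoke the recovery
    routine so the next test starts from the basestate."""
    try:
        testcase()
        return "pass"
    except Exception:
        # e.g. dismiss orphaned dialogs, reopen the main window
        restore_basestate()
        return "fail"

log = []

def flaky_test():
    raise RuntimeError("unexpected dialog")

def restore():
    log.append("restored to basestate")

status = with_recovery(flaky_test, restore)
```

SilkTest's built-in recovery system performs the equivalent of `restore_basestate` automatically around every testcase.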
Scripting Language
Both tools provide proprietary, interpreted scripting languages. Each language provides the usual flow control constructs, arithmetic and logical operators, and a variety of built-in library functions to perform such activities as string manipulation, [limited] regular expression support, standard input and output, etc. But that is where the similarity ends:
• SilkTest provides a strongly typed, object-oriented programming language called 4Test. Variables and constants may be one of 19 built-in data types, along with a user defined record data type. 4Test supports single- and multi-dimensional dynamic arrays and lists, which can be initialized statically or dynamically. Exception handling is built into the language [via the do…except statement].
• WinRunner provides a non-typed, C-like procedural programming language called TSL. Variables and constants are either numbers or strings [conversion between the two types occurs dynamically, depending on usage]. There is no user defined data type such as a record or a data structure. TSL supports sparsely populated associative single- and [pseudo] multi-dimensional arrays, which can be initialized statically or dynamically. Element access is always done using string references: foobar[“1”] and foobar[1] access the same element [as the second access is implicitly converted to an associative string index reference]. Exception handling is not built into the language. The only way to truly understand and appreciate the differences between these two programming environments is to use and experiment with both of them.
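The TSL indexing behaviour described above can be modelled in a few lines of Python (a toy model for illustration, not TSL itself): every index is coerced to a string, so numeric and string indices alias the same element.

```python
class TSLArray(dict):
    """Toy model of a TSL associative array: every index is coerced
    to a string, so a[1] and a["1"] refer to the same element."""

    def __setitem__(self, key, value):
        super().__setitem__(str(key), value)

    def __getitem__(self, key):
        return super().__getitem__(str(key))

foobar = TSLArray()
foobar[1] = "first"
same = foobar["1"]            # the same element as foobar[1]
foobar["2,3"] = "pseudo 2-D"  # pseudo multi-dimensional: a string-keyed index
```

A plain Python dict would treat `1` and `"1"` as distinct keys, which is exactly the difference between a typed and a string-coerced associative array.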
Exception Handling
Both tools provide the ability to handle unexpected events and errors, referred to as exceptions, but implement this feature differently:
• SilkTest’s exception handling is built into the 4Test language: using the do…except construct you can handle an exception locally, instead of [implicitly] using SilkTest’s default built-in exception handler [which terminates the currently running test and logs an error]. If an exception is raised within the do block of the statement, control is immediately passed to the except block of code. A variety of built-in functions [LogError(), LogWarning(), ExceptNum(), ExceptLog(), etc.] and 4Test statements [raise, reraise, etc.] aid in the processing of trapped exceptions within the except block of code.
• WinRunner’s exception handling is built around (1) defining an exception based on its type (Popup, TSL, or object) and relevant characteristics about the exception (most often its error number); (2) writing an exception handler; and (3) enabling and disabling that exception handler at the appropriate point(s) in the code. These tasks can be achieved by hand coding or through the use of the Exception Handling visual recorder.
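The contrast between handling an exception locally and falling through to a default handler can be sketched in Python (the failing GUI action and the handler are invented for illustration; 4Test's do…except and WinRunner's enabled handlers play the roles shown here by try/except):

```python
def default_handler(exc):
    """Stand-in for SilkTest's built-in handler: log and end the test."""
    return f"TEST ABORTED: {exc}"

def click_missing_button():
    # Stand-in for a GUI action that raises an exception.
    raise LookupError("window 'Login' not found")

# Local handling, analogous to 4Test's do ... except block:
try:
    click_missing_button()
except LookupError as exc:
    outcome = f"handled locally: {exc}"  # the test keeps running

# Without a local handler, the default handler ends the run:
try:
    click_missing_button()
except LookupError as exc:
    aborted = default_handler(exc)
```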
Test Results Analysis
• SilkTest’s test results file revolves around the test run. For example, if you run 3 testcases [via a test suite or SilkOrganizer], all of the information for that test run will be stored in a single results file. There is a viewer to analyze the results of the last test run or of the X runs previous to it. Errors captured in the results file contain a full stack trace to the failing line of code, which can be brought up in the editor by double-clicking on any line in that stack trace.
• WinRunner’s test results file revolves around each testcase. For example, if you run 3 testcases [by hand or via a batch test or TestDirector], 3 test results files are created, each in a subdirectory under its associated testcase. There is a viewer to analyze the results of a test’s last run or, if the results have not been deleted, a previous run. Double clicking on events in the log often expands that entry’s information, sometimes bringing up specialized viewers [for example when that event
is some type of checkpoint or some type of failure].
Managing the Testing Process
• SilkTest has a built-in facility, SilkOrganizer, for creating a testplan and then linking the testplan to testcases. SilkOrganizer can also be used to track the
automation process and control the execution of selected groups of testcases. One or more user defined attributes [such as “Test Category”, “Author”, “Module”,
etc.] are assigned to each testcase and then later used in testplan queries to select a group of tests for execution. There is also a modest capability to add manual
test placeholders in the testplan, and then manually add pass/fail status to the results of a full test run. SilkTest also supports a test suite, which is a file containing
calls to one or more test scripts or other test suites.
• WinRunner integrates with a separate program called TestDirector [at a substantial additional cost], for visually creating a test project and then linking WinRunner
testcases into that project. TestDirector is a database repository based application that provides a variety of tools to analyze and manipulate the various database
tables and test results stored in the repository for the project. A bug reporting and tracking tool is included with TestDirector as well [and this bug tracking tool
supports a browser based client]. Using a visual recorder, testcases are added to one or more test sets [such as “Test Category”, “Author”, “Module”, etc.] for
execution as a group. There is a robust capability for authoring manual test cases [i.e. describing each test step and its expected results], interactively executing each manual test, and then saving the pass/fail status for each test step in the repository. TestDirector also allows test sets to be scheduled for execution at a time and date in the future, as well as executing tests remotely on other machines [this last capability requires the Enterprise version of TestDirector]. TestDirector is also capable of interfacing with and executing LoadRunner test scripts as well as other 3rd party test scripts [but this latter capability requires custom programming via TestDirector APIs]. Additionally, TestDirector provides APIs to allow WinRunner as well as other 3rd party test tools [and programming environments] to interface with a TestDirector database.
External Files
When the tool’s source code files are managed with a code control tool such as PVCS or Visual SourceSafe, it is useful to understand what external side files are
created:
• SilkTest implicitly creates *.*o bytecode-like executables after interpreting the source code contained in testcases and include files [but it is unlikely that most people will want to put these files under source code control]. No side files are created in the course of using its recorders. SilkTest does, though, create explicit *.bmp files for storing the expected and actual captured bitmap images when performing a bitmap verification.
• WinRunner implicitly creates many side files using a variety of extensions [*.eve, *.hdr, *.asc, *.ckl, *.chk, and a few others] in a variety of implicitly created
subdirectories [/db, /exp, /chklist, /resX] under the testcase in the course of using its visual recorders as well as storing pass/fail results at runtime.
Automated Testing Detail Test Plan
Automated Testing DTP Overview
This Automated Testing Detail Test Plan (ADTP) will identify the specific tests that are to be performed to ensure the quality of the delivered product.
System/Integration Test ensures the product functions as designed and all parts work together. This ADTP will cover information for Automated testing during the
System/Integration Phase of the project and will map to the specification or requirements documentation for the project. This mapping is done in conjunction with
the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document.
This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria, and roles and responsibilities of the
Automated Test Team are identified such that they can execute the test.
The objectives of this ADTP are:
• Describe the test to be executed.
• Identify and assign a unique number for each specific test.
• Describe the scope of the testing.
• List what is and is not to be tested.
• Describe the test approach detailing methods, techniques, and tools.
• Outline the Test Design including:
• Functionality to be tested.
• Test Case Definition.
• Test Data Requirements.
• Identify all specifications for preparation.
• Identify issues and risks.
• Identify actual test cases.
• Document the design point.

Test Identification
This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The test effort may be
referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing progress.

Test Purpose and Objectives


Automated testing during the System/Integration Phase as referenced in this document is intended to ensure that the product functions as designed directly from
customer requirements. The testing goal is to identify the quality of the structure, content, accuracy and consistency, some response times and latency, and
performance of the application as defined in the project documentation.

Assumptions, Constraints, and Exclusions


Factors which may affect the automated testing effort, and may increase the risk associated with the success of the test include:

• Completion of development of front-end processes
• Completion of design and construction of new processes
• Completion of modifications to the local database
• Movement or implementation of the solution to the appropriate testing or production environment
• Stability of the testing or production environment
• Load Discipline
• Maintaining recording standards and automated processes for the project
• Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid

Entry Criteria
The ADTP is complete, excluding actual test results. The ADTP has been signed-off by appropriate sponsor representatives indicating consent of the plan for
testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration Management rules are in place.
The environment for testing, including databases, application programs, and connectivity has been defined, constructed, and verified.

Exit Criteria
In establishing the exit/acceptance criteria for the Automated Testing during the System/Integration Phase of the test, the Project Completion Criteria defined in the
Project Definition Document (PDD) should provide a starting point. All automated test cases have been executed as documented. The percent of successfully
executed test cases met the defined criteria. Recommended criteria: No Critical or High severity problem logs remain open and all Medium problem logs have
agreed upon action plans; successful execution of the application to validate accuracy of data, interfaces, and connectivity.
Pass/Fail Criteria
The results for each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where applicable). The actual results
are logged in the Test Case detail within the Detail Test Plan if those results differ from the expected results. If the actual results match the expected results, the
Test Case can be marked as a passed item, without logging the duplicated results.
A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if the actual results
produced by its execution do not match the expected results. The source of failure may be the application under test, the test case, the expected results, or the
data in the test environment. Test case failures must be logged regardless of the source of the failure. Any bugs or problems will be logged in the DEFECT
TRACKING TOOL.
The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the problem log is notified, and the
item is re-tested. If the retest is successful, the status is updated and the problem log is closed.
If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is updated with the new
findings. It is then returned to the responsible application personnel for correction and test.
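The pass/fail comparison and defect-logging flow described above can be sketched as follows (the test-case IDs and results are hypothetical, and the defect log stands in for the DEFECT TRACKING TOOL):

```python
def evaluate_test_case(case_id, actual, expected, defect_log):
    """Compare actual to expected results per the ADTP pass/fail
    criteria; log a defect only on mismatch, since passing cases
    need not duplicate the expected results."""
    if actual == expected:
        return "pass"
    defect_log.append(
        {"case": case_id, "actual": actual, "expected": expected}
    )
    return "fail"

defects = []
r1 = evaluate_test_case("TC-001", "order accepted", "order accepted", defects)
r2 = evaluate_test_case("TC-002", "error 500", "order accepted", defects)
```

The source of a logged failure (application, test case, expected results, or test data) is then determined during triage, as the text notes.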
Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group. The standard Severity Codes to be used for identifying defects are:
Table 1 Severity Codes
Severity Code Number | Severity Code Name | Description
1 | Critical | Automated tests cannot proceed further within the applicable test case (no work-around).
2 | High | The test case or procedure can be completed, but produces incorrect output when valid information is input.
3 | Medium | The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (e.g. no special characters are allowed per the specifications, but when a special character is part of the test the system allows the user to continue; this is a medium severity).
4 | Low | All test cases and procedures passed as written, but there could be minor revisions, cosmetic changes, etc. These defects do not impact functional execution of the system.
The use of the standard Severity Codes produces four major benefits:

• Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test. Time spent in discussion about the
appropriate priority of a problem is minimized.
• Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that functions as documented
in the requirements and design documents.
• Use of the standard Severity Codes works to ensure consistency in the requirements, design, and test documentation with an appropriate level of detail
throughout.
• Use of the standard Severity Codes promotes effective escalation procedures.
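As one possible encoding of Table 1 for a defect-tracking script (the dictionary restates the table; the helper function is an illustrative assumption, not part of any tool named in this plan):

```python
# Severity codes from Table 1 of this ADTP.
SEVERITY_CODES = {
    1: "Critical",  # automated tests cannot proceed; no work-around
    2: "High",      # completes, but incorrect output for valid input
    3: "Medium",    # correct for valid input, incorrect for invalid input
    4: "Low",       # cosmetic; no impact on functional execution
}

def assign_severity(blocks_test, bad_on_valid, bad_on_invalid):
    """Map observed behaviour to a severity code per Table 1,
    checking conditions from most to least severe."""
    if blocks_test:
        return 1
    if bad_on_valid:
        return 2
    if bad_on_invalid:
        return 3
    return 4
```

Such a mechanical mapping is what makes the codes "objective and easily and accurately assigned," as the benefits list notes.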

Test Scope
The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing.
Items to be tested by Automation (PRODUCT NAME ...)
Items not to be tested by Automation (PRODUCT NAME ...)

Test Approach
Description of Approach
The mission of Automated Testing is to identify recordable test cases through all appropriate paths of a website, create repeatable scripts, interpret test results, and report to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing undergone on the system. Automated test results will be generated, formatted into reports, and provided on a consistent basis to Generic project management.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper
execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for
ensuring that this level of testing is performed.
Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether links inside and outside the website work, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
For this project, the System and Integration ADTP and Detail Test Plan complement each other.
Since the goal of the System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency, response time and latency,
and performance of the application, test cases are included which focus on determining how well this quality goal is accomplished.
Content testing focuses on whether the content of the pages match what is supposed to be there, whether key phrases exist continually in changeable pages, and
whether the pages maintain quality content from version to version.
Accuracy and consistency testing focuses on whether today’s copies of the pages download the same as yesterday’s, and whether the data presented to the user
is accurate enough.
Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, or whether parts of a site are so slow that the user discontinues working. Although LoadRunner provides the full measure of this test, various ad hoc time measurements will be taken within certain WinRunner scripts as needed.
Performance testing (Loadrunner) focuses on whether performance varies by time of day or by load and usage, and whether performance is adequate for the
application.
Completion of automated test cases is denoted in the test cases with indication of pass/fail and follow-up action.
Test Definition
This section addresses the development of the components required for the specific test. Included are identification of the functionality to be tested by automation,
the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated
product components.

Test Functionality Definition (Requirements Testing)


The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo testing by automation, the Test
Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation, and to facilitate tracking and
monitoring the test progress.
As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the
test.

Test Case Definition (Test Design)


Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and
output specifications. This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP.

Test Data Requirements


The automated test data required for the test is described below. The test data will be used to populate the databases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or OTS Automation Test Analyst.

Automation Recording Standards


Initial Automation Testing Rules for the Generic Project:
1. Ability to move through all paths within the applicable system
2. Ability to identify and record the GUI Maps for all associated test items in each path
3. Specific times for loading into automation test environment
4. Code frozen between loads into automation test environment
5. Minimum acceptable system stability
Winrunner Menu Settings
1. Default recording mode is CONTEXT SENSITIVE
2. Record owner-drawn buttons as OBJECT
3. Maximum length of list item to record is 253 characters
4. Delay for Window Synchronization is 1000 milliseconds (unless Loadrunner is operating in same environment and then must increase appropriately)
5. Timeout for checkpoints and CS statements is 1000 milliseconds
6. Timeout for Text Recognition is 500 milliseconds
7. All scripts will stop and start on the main menu page
8. All recorded scripts will remain short, since short scripts are easier to debug. However, entire scripts, or portions of scripts, can be chained together for long runs once the environment has greater stability.

Winrunner Script Naming Conventions


1. All automated scripts will begin with GE abbreviation representing the Generic Project and be filed under the Winrunner on LAB11 W Drive/Generic/Scripts
Folder.
2. GE will be followed by the Product Path name in lower case: air, htl, car
3. After an automated script has been debugged, a date will be attached to it: 0710 for July 10. When significant improvements have been made to the same script, the date will be changed.
4. As incremental improvements are made to an automated script, version numbers will be attached to identify the script with the latest improvements: e.g. XX0710.1, XX0710.2, where the .2 version is the most up to date.
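A small helper, purely illustrative and not part of the project tooling, showing the names the convention above would produce:

```python
def script_name(product_path, month, day, version=None):
    """Build a WinRunner script name per the Generic Project
    convention: the GE prefix, a lower-case product path
    (air, htl, car), a MMDD date, and an optional incremental
    version suffix for debugged scripts."""
    name = f"GE{product_path.lower()}{month:02d}{day:02d}"
    if version is not None:
        name += f".{version}"
    return name

first = script_name("air", 7, 10)        # dated script for July 10
improved = script_name("air", 7, 10, 2)  # the .2 version is most recent
```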

Winrunner GUIMAP Naming Conventions


1. All Generic GUI Maps will begin with XX followed by the area of test. E.g. the XXpond GUI Map represents all pond paths, the XXEmemmainmenu GUI Map represents all membership and main menu concerns, and the XXlogin GUI Map represents all XX login concerns.
2. As there can only be one GUI Map for each Object, etc on the site, they are under constant revision when the site is undergoing frequent program loads.

Winrunner Result Naming Conventions


1. When beginning a script, allow the default res## name to be filed.
2. After a successful run of a script where the results will be used toward a report, move the file to results and rename it: XX for the project name, res for Test Results, 0718 for the date the script was run, your initials, and the original default number for the script. E.g. XXres0718jr.1

Winrunner Report Naming Conventions


1. When the accumulation of test result(s) files for the day are formulated, and the statistics are confirmed, a report will be filed that is accessible by upper
management. The daily Report file will be as follows: XXdaily0718 XX for project name, daily for daily report, and 0718 for the date the report was issued.
2. When the accumulation of test result(s) files for the week are formulated, and the statistics are confirmed, a report will be filed that is accessible by upper
management. The weekly Report file will be as follows: XXweek0718 XX for project name, week for weekly report, and 0718 for the date the report was issued.
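The result and report naming conventions above can likewise be checked with a small illustrative helper (again an assumption for demonstration, not project tooling):

```python
def result_name(project, month, day, initials, run_number):
    """XXres0718jr.1: project code, 'res', MMDD run date, tester
    initials, and the script's original default number."""
    return f"{project}res{month:02d}{day:02d}{initials}.{run_number}"

def report_name(project, period, month, day):
    """XXdaily0718 / XXweek0718: project code, report period
    ('daily' or 'week'), and MMDD issue date."""
    return f"{project}{period}{month:02d}{day:02d}"

res = result_name("XX", 7, 18, "jr", 1)
daily = report_name("XX", "daily", 7, 18)
weekly = report_name("XX", "week", 7, 18)
```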
Winrunner Script, Result and Report Repository
1. LAB 11, located within the XX Test Lab, will house the original Winrunner Script, Results and Report Repository for automated testing within the Generic Project.
WRITE access is granted to Winrunner Technicians and READ ONLY access is granted to those who are authorized to run scripts but not make any improvements.
This is meant to maintain the purity of each script version.
2. Winrunner on LAB11 W Drive houses all Winrunner related documents, etc for XX automated testing.
3. Project file folders for the Generic Project represent the initial structure of project folders utilizing automated testing. As our automation becomes more
advanced, the structure will spread to other appropriate areas.
4. Under each Project file folder, a folder for SCRIPT, RESULT and REPORT can be found.
5. All automated scripts generated for each project will be filed under Winrunner on LAB11 W Drive/Generic/Scripts Folder and moved to folder ARCHIVE
SCRIPTS as necessary
6. All GUI MAPS generated will be filed under Winrunner on LAB11 W Drive/Generic/Scripts/gui_files Folder.
7. All automated test results are filed under the individual Script Folder after each script run. Results will be referred to and reports generated utilizing applicable
statistics. Automated Test Results referenced by reports sent to management will be kept under the Winrunner on LAB11 W Drive/Generic/Results Folder. Before
work on evaluating a new set of test results is begun, all prior results are placed into Winrunner on LAB11 W Drive/Generic/Results/Archived Results Folder. This
will ensure all reported statistics are available for closer scrutiny when required.
8. All reports generated from automated scripts and sent to upper management will be filed under Winrunner on LAB11 W Drive/Generic/Reports Folder

Test Preparation Specifications


Test Environment
Environment for Automated Test
Automated Test environment is indicated below. Existing dependencies are entered in comments.

Environment | Test System | Comments
Test | System/Integration Test (SIT) Cert | Access via http://xxxxx/xxxxx
Production | Production | Access via http://www.xxxxxx.xxx
Other (specify) | Development | Individual Test Environments
Hardware for Automated Test
The following is a list of the hardware needed to create production like environment:
Manufacturer | Device Type
Various | Personal Computer (486 or higher) with monitor and required peripherals; with connectivity to internet test/production environments. Must be enabled to ADDITIONAL REQUIREMENTS.
Software
The following is a list of the software needed to create a production like environment:
Software | Version (if applicable) | Programmer Support
Netscape Navigator | ZZZ or higher | -
Internet Explorer | ZZZ or higher | -
Test Team Roles and Responsibilities

Role | Responsibilities | Name
COMPANY NAME Sponsor | Approve project development, handle major issues related to project development, and approve development resources | Name, Phone
XXX Sponsor | Signature approval of the project, handle major issues | Name, Phone
XXX Project Manager | Ensures all aspects of the project are being addressed from CUSTOMERS’ point of view | Name, Phone
COMPANY NAME Development Manager | Manage the overall development of the project, including obtaining resources, handling major issues, approving technical design and overall timeline, and delivering the overall product according to the Partner Requirements | Name, Phone
COMPANY NAME Project Manager | Provide PDD (Project Definition Document), project plan, and status reports; track project development status; manage changes and issues | Name, Phone
COMPANY NAME Technical Lead | Provide technical guidance to the Development Team and ensure that overall development is proceeding in the best technical direction | Name, Phone
COMPANY NAME Back End Services Manager | Develop and deliver the necessary Business Services to support the PROJECT NAME | Name, Phone
COMPANY NAME Infrastructure Manager | Provide PROJECT NAME development certification, production infrastructure, service level agreement, and testing resources | Name, Phone
COMPANY NAME Test Coordinator | Develops ADTP and Detail Test Plans, tests changes, logs incidents identified during testing, coordinates testing effort of the test team for the project | Name, Phone
COMPANY NAME Tracker Coordinator / Tester | Tracks XXX’s in DEFECT TRACKING TOOL. Reviews new XXX’s for duplicates and completeness and assigns them to Module Tech Leads for fix. Produces status documents as needed. Tests changes, logs incidents identified during testing. | Name, Phone
COMPANY NAME Automation Engineer | Tests changes, logs incidents identified during testing | Name, Phone
Test Team Training Requirements
Automation Training Requirements
Training Requirement Training Approach Target Date for Completion Roles/Resources to be Trained
. . . .
. . . .

Automation Test Preparation

1. Write and receive approval of the ADTP from Generic Project management
2. Manually test the cases in the plan to make sure they actually work before recording repeatable scripts
3. Record appropriate scripts and file them according to the naming conventions described within this document
4. Initial order of automated script runs will be to load GUI Maps through a STARTUP script. After the successful run of this script, scripts testing all paths
will be kicked off. Once an appropriate number of PNR’s are generated, GenericCancel scripts will be used to automatically take the inventory out of the
test profile and system environment. During the automation test period, requests for testing of certain functions can be accommodated as necessary as
long as these functions have the ability to be tested by automation.
5. The ability to use Generic Automation will be READ ONLY for anyone outside of the test group. This is required to maintain the pristine condition of the master scripts in our data repository.
6. Generic Test Group will conduct automated tests under the rules specified in our agreement for use of the Winrunner tool marketed by Mercury
Interactive.
7. Results filed for each run will be analyzed as necessary, reports generated, and provided to upper management.

Test Issues and Risks


Issues
The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will not be maintained; these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue Management Process.
Issue | Impact | Target Date for Resolution | Owner
COMPANY NAME test team is not in possession of market data regarding which browsers are most in use in the CUSTOMER market. | Testing may not cover some target browsers used by CLIENT customers. | Beginning of Automated Testing during System and Integration Test Phase | CUSTOMER TO PROVIDE
OTHER | . | . | .

Risks
The table below identifies any high impact or highly probable risks that may impact the success of the Automated testing process.
Risk Assessment Matrix
Risk Area | Potential Impact | Likelihood of Occurrence | Difficulty of Timely Detection | Overall Threat (H, M, L)
1. Unstable Environment | Delayed Start | HISTORY OF PROJECT | Immediately | .
2. Quality of Unit Testing | Greater delays taken by automated scripts | Dependent upon quality standards of development group | Immediately | .
3. Browser Issues | Intermittent Delays | Dependent upon browser version | Immediately | .
Risk Management Plan
Risk Area Preventative Action Contingency Plan Action Trigger Owner
1. Meet with Environment Group . . . .
2. Meet with Development Group . . . .
3. . . . .
Traceability Matrix
The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project's completion.
Each business requirement must have an established priority as outlined in the Business Requirements Document.
They are:
Essential - Must satisfy the requirement to be accepted by the customer.
Useful - Value-added requirement influencing the customer's decision.
Nice-to-have - Cosmetic non-essential condition, makes product more appealing.
The Traceability Matrix will change and evolve throughout the entire project life cycle. The requirement definitions, priority, functional requirements, and automated
test cases are subject to change and new requirements can be added. However, if new requirements are added or existing requirements are modified after the
Business Requirements document and this document have been approved, the changes will be subject to the change management process.
The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition and the project, a copy
will be added to the project notebook.

Functional Areas of Traceability Matrix


# Functional Area Priority
B1 Pond E
B2 River E
B3 Lake U
B4 Sea E
B5 Ocean E
B6 Misc U
B7 Modify E
L1 Language E
EE1 End-to-End Testing EE
Legend:
B = Order Engine
L = Language
N = Nice to have
EE = End-to-End
E = Essential
U = Useful
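As a sketch, the matrix above can be held in a simple lookup that traces each requirement to its test cases. The mapping below is illustrative only; the test-case ID "AB1.1.1" follows the naming convention shown later in this document, and which cases trace to which requirements is an assumption.

```python
# Minimal traceability-matrix sketch: requirement ID -> functional area,
# priority, and the test cases traced to it (illustrative data).
traceability = {
    "B1": {"area": "Pond",  "priority": "E", "test_cases": ["AB1.1.1"]},
    "B2": {"area": "River", "priority": "E", "test_cases": []},
    "B3": {"area": "Lake",  "priority": "U", "test_cases": []},
}

def untested_essential(matrix):
    """Return essential (E) requirements with no test case traced to them."""
    return [rid for rid, row in matrix.items()
            if row["priority"] == "E" and not row["test_cases"]]

print(untested_essential(traceability))  # prints ['B2']
```

A check like this makes the matrix actionable: before sign-off, every essential requirement should trace to at least one test case.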
Definitions for Use in Testing
Test Requirement
A test requirement (or scenario) is a prose statement of requirements for the test. Just as there are high-level and detailed requirements in application development, there is a need to provide detailed requirements in the test development area.

Test Case
A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain the actual entries to be
executed as well as the expected results, i.e., what a user entering the commands would see as a system response.

Test Procedure
Test procedures define the activities necessary to execute a test case or set of cases. Test procedures may contain information regarding the loading of data and
executables into the test system, directions regarding sign in procedures, instructions regarding the handling of test results, and anything else required to
successfully conduct the test.

Automated Test Cases


NAME OF FUNCTION Test Case
_______________________________________________________________________________________
|Project Name/Number |Generic Project / Project Request #|Date | |
|________________________|___________________________________|__________|_____________|
|Test Case Description |Check all drop down boxes, fill in | | |
| |boxes and pop-up windows operate |Build # | |
| |according to requirements on the |__________|_____________|
| |main Pond web page. | Run # | |
|________________________|___________________________________|__________|_____________|
|Function / Module | B1.1 |Execution | |
| Under Test | | Retry # | |
|________________________|___________________________________|__________|_____________|
|Test Requirement # | |Case # |AB1.1.1(A for|
| | | | Automated) |
|________________________|___________________________________|__________|_____________|
|Written by |
|_____________________________________________________________________________________|
|Goals | Verify that Pond module functions as required |
|_______________|_____________________________________________________________________|
|Setup for Test | Access browser, Go to .. . |
|_______________|_____________________________________________________________________|
|Pre-conditions | Login with name and password. When arrive at Generic Main Menu... |
|_______________|_____________________________________________________________________|
|Step|Action|Expected Results |Pass/Fail|Actual Results if Step Fails |
|____|______|_________________________________|_______________________________________|
| |Go to |From the Generic Main Menu, | |
| | |click on the Pond gif and go to | |
| | Pond |Pond web page. Once on the Pond | |
| | and |web page, check all drop down | |
| |.. |boxes for appropriate information| |
| | |(eg Time.7a, 8a in 1 hour | |
| | |increments), fill in boxes | |
| | |(remarks allows alpha and numeric| |
| | |but no other special characters),| |
| | |and pop up windows (eg. Privacy.| |
| | |Ensure it is retrieved, has | |
| | |correct verbiage and closes). | |
|____|______|_________________________________|_______________________________________|
Each automation project team needs to write up an automation standards document stating the following:

• The installation configurations of the automation tool.


• How the client machines' environment will be set up.
• Where the network repositories and manual test plan documents are located.
• Identify the drive letter that all client machines must map to.
• How the automation tool will be configured.
• Identify what servers and databases the automation will run against.
• Any naming standards that the test procedures, test cases, and test plans will follow.
• Any recording standards and scripting standards that all scripts must follow.
• Describe what components of the product will be tested.

Installation Configuration
Install Step: Selection: Completed:
Installation Components Full
Destination Directory C:\sqa6
Type Of Repository Microsoft Access
Scripting Language SQA Basic only
Test Station Name Your PC Name
DLL messages Overlay all DLLs the system prompts for. Robot will not run without its own DLLs.

Client Machines Configuration


Configuration Item | Setting | Notes
Lotus Notes | Shut down Lotus Notes before using Robot. | Prevents mail notification messages from interrupting your scripts and allows Robot to have more memory.
Close all applications | Close down all applications (except the SQA Robot recorder and the application you are testing). | Frees up memory on the PC.
Shut down printing | Select the Printers window from the Start menu, select File -> Server Properties, select the Advanced tab, and un-check the Notify check box. |
Shut down network messages | Bring up a DOS prompt, select the Z drive, and type CASTOFF. |
Turn off screensavers | Select NONE or change the delay to 90 minutes. |
Display settings for PC | Set in the Control Panel Display applet: Colors 256, Font Size small, Desktop 800 x 600 pixels. |
Map a network drive to {LETTER} | Bring up Explorer and map a network drive to the repository location. |

Repository Creation
Item Information
Repository Name
Location
Mapped Drive Letter
Project Name
Users set up for Project Admin - no password
SBH files used in project scripts
Client Setup Options for the SQA Robot tool
Option Window Option Selection
Recording ID list selections by Contents
ID Menu selections by Text
Record unsupported mouse drags as Mouse click if within object
Window positions Record Object as text Auto record window size
While Recording Put Robot in background
Playback Test Procedure Control Delay Between: 5000 milliseconds
Partial Window Caption On Each window search
Caption Matching options Check - Match reverse captions Ignore file extensions Ignore Parenthesis
Test Log Test log Management Output Playback results to test log All details
Update SQA repository View test log after playback
Test Log Data Specify Test Log Info at Playback
Unexpected Window Detect Check
Capture Check
Playback response Select pushbutton with focus
On Failure to remove Abort playback
Wait States Wait Pos/Neg Region Retry - 4 Timeout after 90
Automatic wait Retry - 2 Timeout after 120
Keystroke option Playback delay 100 milliseconds Check record delay after enter key
Error Recovery On Script command Failure Abort Playback
On test case failure Continue Execution
SQA trap Check all but last 2
Object Recognition Do not change
Object Data Test Definitions Do not change
Editor Leave with defaults
Preferences Leave with defaults

Identify what Servers and Databases the automation will run against.
This {Project name} will use the following Servers:
{Add servers}
On these Servers it will be using the following Databases:
{Add databases}

Naming standards for test procedures, cases and plans


The naming standards for this project are:

Recording standards and scripting standards


In order to ensure that scripts are compatible on the various clients and run with minimum maintenance, the following recording standards have been set for all recorded scripts.

1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Prefer main menu selections over double clicks, toolbar items, and pop-up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not Save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.
6. Do a window existence test for every window you open; this will prevent scripts dying from slow client/server calls.
7. Do not use the mouse for drop-down selections; whenever possible use hotkeys and the arrow keys.
8. When navigating through a window, use the tab and arrow keys instead of the mouse; this will make maintenance of scripts due to UI changes easier in the future.
9. Create a template header file called testproc.tpl. This file will insert template header information at the top of all recorded scripts. This template area can be used for modification tracking and commenting on the script.
10. Comment all major selections or events in the script. This will make debugging easier.
11. Make sure that you maximize all MDI main windows in login initial scripts.
12. When recording, make sure you begin and end your scripts in the same position. For example, on the platform browser always start your script by opening the browser tree and selecting your activity (this ensures that the activity window will always be in the same position); likewise, always end your scripts by collapsing the browser tree.
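Standard 6 above (a window existence test before every interaction) can be sketched in a tool-neutral way. The `exists` callback below is a hypothetical stand-in for whatever window-lookup call your automation tool provides; the polling loop is the point.

```python
import time

def wait_for_window(exists, title, timeout=30.0, interval=0.5):
    """Poll until a window named `title` exists, or raise.

    `exists(title)` is a stand-in for the automation tool's window
    query. Polling, rather than a fixed sleep, keeps scripts from
    dying on slow client/server calls while still failing fast when
    a window genuinely never appears.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if exists(title):
            return True
        time.sleep(interval)
    raise TimeoutError(f"Window {title!r} did not appear within {timeout}s")
```

An assisting script would call this immediately after opening each activity window, before any recorded interactions run against it.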

Describe what components of the product will be tested.


This project will test the following components:
The objective is to:
Why are there Bugs?
Since humans design and program hardware and software, mistakes are inevitable. That's what computer and software vendors tell us, and it's partly true. What
they don't say is that software is buggier than it has to be. Why? Because time is money, especially in the software industry. This is how bugs are born: a software
or hardware company sees a business opportunity and starts building a product to take advantage of that. Long before development is finished, the company
announces that the product is on the way. Because the public is (the company hopes) now anxiously awaiting this product, the marketing department fights to get
the goods out the door before that deadline, all the while pressuring the software engineers to add more and more features. Shareholders and venture capitalists
clamor for quick delivery because that's when the company will see the biggest surge in sales. Meanwhile, the quality-assurance division has to battle for sufficient
bug-testing time.

bug
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Defect:
Anything that does not perform as specified. This could be hardware, software, network, performance, format, or functionality.

Defect risk
The process of identifying the amount of risk the defect could cause. This will assist in determining if the defect can go undetected into implementation.

Defect log:
A log or database of all defects that were uncovered during the testing and maintenance phase of development. It categorizes defects into severity and similarity in
an attempt to identify areas requiring special attention.
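As a sketch, a defect-log record of the kind just described could be modeled as a small data structure with a severity and a similarity key. The fields, component names, and grouping rule below are illustrative assumptions, not part of this document's tooling.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: int    # 1 = highest impact ... 5 = documentation-level
    component: str   # grouping key used to cluster similar defects

def areas_needing_attention(defects, threshold=2):
    """Group defects by component and flag components whose defect
    count reaches the threshold -- the similarity grouping the log
    uses to identify areas requiring special attention."""
    counts = defaultdict(int)
    for d in defects:
        counts[d.component] += 1
    return sorted(c for c, n in counts.items() if n >= threshold)

log = [
    Defect("D1", "crash on save", 1, "editor"),
    Defect("D2", "typo in dialog", 5, "editor"),
    Defect("D3", "slow page load", 3, "network"),
]
print(areas_needing_attention(log))  # prints ['editor']
```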

What is the difference between a bug, a defect, and an error?


Error:
A human action that produces an incorrect result.
A programming mistake leads to an error.

Bug:
An informal word describing any of the above.
A deviation from the expected result.
A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an incorrect result. Bugs
arise from mistakes and errors, made by people, in either a program's source code or its design. It is said that there are bugs in all useful computer programs, but
well-written programs contain relatively few bugs, and these bugs typically do not prevent the program from performing its task. A program that contains a large
number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports about bugs in a program are referred to as bug reports, also
called PRs (problem reports), trouble reports, CRs (change requests), and so forth.

Defect:
A problem in an algorithm leads to a failure.
A defect is something that normally works but has something out of spec.

Bug Impacts
Low impact
This is for Minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These
problems do not impact use of the product in any substantive way.

Medium impact
This is a problem that: a) Affects a more isolated piece of functionality. b) Occurs only at certain boundary conditions. c) Has a workaround (where "don't do that" might be an acceptable answer to the user). d) Occurs at only one or two customers. or e) Is very intermittent.

High impact
This should be used only for serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs would fall in this category, as would major functionality not working.

Urgent impact
This should be reserved for only the most catastrophic of problems. Data corruption, complete inability to use the product at almost any site, etc. For released
products, an urgent bug would imply that shipping of the product should stop immediately, until the problem is resolved.

Error rate:
The mean time between errors. This can be a statistical value between any errors or it could be broken down into the rate of occurrence between similar errors.
Error rate can also have a perception influence. This is important when identifying the "good-enough" balance of the error; in other words, whether the mean time between errors is greater than what the ultimate user will accept.

Issue log:
A log kept of all issues raised during the development process. This could contain problems uncovered, the impact of changes to specifications, or the loss of a key individual. It is anything that must be tracked and monitored.

Priority
Priority is Business.
Priority is a measure of importance of getting the defect fixed as governed by the impact to the application, number of users affected, and company's reputation,
and/or loss of money.

Priority levels:

• Now: drop everything and take care of it as soon as you see this (usually for blocking bugs)
• P1: fix before next build to test
• P2: fix before final release
• P3: we probably won’t get to these, but we want to track them anyway

Priority levels

1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time; somewhat trivial. May be postponed.

Priority levels

• High: This has a major impact on the customer. This must be fixed immediately.
• Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development, or a patch must
be issued if possible.
• Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

Severity
Severity is Technical.
Severity is a measure of the impact of the defect on the overall operation of the application being tested.

Severity level:
The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level requiring immediate attention. Severity 5 usually
represents a documentation defect of minimal impact.

Severity levels:
• Critical: the software will not run
• High: unexpected fatal errors (includes crashes and data corruption)
• Medium: a feature is malfunctioning
• Low: a cosmetic issue

Severity levels

1. Bug causes system crash or data loss.


2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
3. Bug causes minor functionality problems; may affect "fit and finish".
4. Bug contains typos, unclear wording or error messages in low visibility fields.

Severity levels

• High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot
continue.
• Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and
testing can continue.
• Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can
proceed without interruption.

Severity and Priority


Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the
release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the
current schedule. It’s relative. It shifts over time. And it’s a business decision.
Severity is an absolute: it’s an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity
should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it’s still a high severity
issue when it’s deferred to the next release. The severity hasn’t changed just because we’ve run out of time. The priority changed.

Severity Levels can be defined as follow:


S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the user to close the window.
The tester's ability to operate the system is either totally (System Down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.

S2 - Medium/Workaround. For example, a problem exists in an area required by the specs, but the tester can go on with testing. The incident affects an area of functionality but there is a work-around which negates the impact to the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs at only one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in
layout/formatting. Problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and of no or very low impact
to business processes.
Triage Meetings (Bug Councils)
Bug Triage Meetings are project meetings in which open bugs are divided into categories.

Categories for software bugs


1. bugs to fix now
2. bugs to fix later
3. bugs we'll never fix

Bug Analyzing and Reproduction Tips


To reproduce an environment-dependent error, both the exact sequence of activities and the environment conditions (e.g. operating system, browser version, add-on components, database server, Web server, third-party components, client-server resources, network bandwidth and traffic, etc.) in which the application operates must be replicated.
Environment-independent errors, on the other hand, are easier to reproduce -- they do not require replicating the operating environment. With environment-independent errors, all that needs to be replicated are the steps that generate the error.

Browser Bug Analyzing Tips

• Check if the client operating system (OS) version and patches meet system requirements.
• Check if the correct version of the browser is installed on the client machine.
• Check if the browser is properly installed on the machine.
• Check the browser settings.
• Check with different browsers (e.g., Netscape Navigator versus Internet Explorer).
• Check with different supported versions of the same browser (e.g., 3.1, 3.2, 4.2, 4.3, etc.).

Equivalence Class Partitioning and Boundary Condition Analysis


Equivalence class partitioning is a timesaving practice that identifies tests that are equivalent to one another; when two inputs are equivalent, you expect them to cause the identical sequence of operations to take place, or to cause the same path to be executed through the code. When two or more test cases are seen as equivalent, the resource savings associated with not running the redundant tests normally outweighs the risk.
An example of an equivalence class is the testing of a data-entry field in an HTML form. If the field accepts a five-digit ZIP code (e.g., 22222), then it can reasonably be assumed that the field will accept all other five-digit ZIP codes (e.g., 33333, 44444, etc.).
In equivalence partitioning, both valid and invalid values are treated in this manner. For example, if entering six letters into the ZIP code field just described results in an error message, then it can reasonably be assumed that all six-letter combinations will result in the same error message. Similarly, if entering a four-digit number into the ZIP code field results in an error message, then it should be assumed that all four-digit combinations will result in the same error message.
EXAMPLES OF EQUIVALENCE CLASSES

• Ranges of numbers (such as all numbers between 10 and 99, which are of the same two-digit equivalence class)
• Membership in groups (dates, times, country names, etc.)
• Invalid inputs (placing symbols into text-only fields, etc.)
• Equivalent output events (variation of inputs that produce the same output)
• Equivalent operating environments
• Repetition of activities
• Number of records in a database (or other equivalent objects)
• Equivalent sums or other arithmetic results
• Equivalent numbers of items entered (such as the number of characters entered into a field)
• Equivalent space (on a page or on a screen)
• Equivalent amount of memory, disk space, or other resources available to a program.
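The ZIP-code example above can be expressed as one representative test per equivalence class. The `validate_zip` function below is a hypothetical stand-in for the field under test (assuming the field accepts exactly five digits); each entry in the table represents a whole class of equivalent inputs.

```python
import re

def validate_zip(value):
    """Stand-in for the ZIP-code field under test
    (assumption: exactly five digits are accepted)."""
    return bool(re.fullmatch(r"\d{5}", value))

# One representative input per equivalence class: if "22222" passes,
# every other five-digit code is expected to pass the same way.
equivalence_classes = {
    "valid five digits":  ("22222",  True),
    "six letters":        ("abcdef", False),
    "four digits":        ("2222",   False),
    "digits plus symbol": ("2222!",  False),
}

for name, (value, expected) in equivalence_classes.items():
    assert validate_zip(value) == expected, name
```

Running four tests here stands in for the unbounded set of five-digit, six-letter, and four-digit inputs, which is exactly the resource saving the technique buys.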

Boundary values mark the transition points between equivalence classes. They can be limit values that define the line between supported inputs and unsupported inputs, or they can define the line between supported system requirements and unsupported system requirements. Applications are more susceptible to errors at the boundaries of equivalence classes, so boundary condition tests can be quite effective at uncovering errors.
Generally, each equivalence class is partitioned by its boundary values. Nevertheless, not all equivalence classes have boundaries. For example, given the browser equivalence classes Netscape Navigator 4.6.1 and Microsoft Internet Explorer 4.0 and 5.0, there is no boundary defined among the classes.
Each equivalence class represents potential risk. Under the equivalence class approach to developing test cases, at most nine test cases should be executed against each partition.
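For the two-digit class (10 to 99) mentioned earlier, boundary tests sit just inside and just outside each limit. The `accepts_two_digit` function is a hypothetical stand-in for the input check under test.

```python
def accepts_two_digit(n):
    """Stand-in for a field whose supported equivalence class
    is the two-digit numbers 10..99 (an assumed spec)."""
    return 10 <= n <= 99

# Boundary-condition tests target the transition points of the class:
boundary_cases = [
    (9,   False),  # just below the lower boundary
    (10,  True),   # lower boundary itself
    (99,  True),   # upper boundary itself
    (100, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert accepts_two_digit(value) == expected
```

Off-by-one mistakes (writing `<` where `<=` was intended, for instance) are caught by exactly these four cases, which is why boundary tests are so effective.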

Rules for bug level


Rules for bug level will be determined by the project goals and the project stakeholders. For example, if a software product's graphical user interface is very important in the market competition, inconsistencies in the GUI may be more important than missing functionality.
Critical: Errors that prevent the program from running.
High: An important function cannot be completed, or produces bad output when good data is input.
Medium: Important functions can be completed and produce good output when good data is input, but produce bad output when bad data is input.
Low: The function works, but there are minor UI problems, such as wrong colors or wrong text font.
Some teams use customer impact as the single way to rank a bug because it eliminates differing definitions among different groups. Customer impact is customer impact: there isn't an "impact to testing", a "marketing priority", or a "customer support" priority. There is merely a customer impact. Since all of us produce software for a customer, that is really the only field needed. It eliminates confusion in our profession as well as within the companies that each of us works for.
Believe Defect-Free Software is Possible
The average engineer acts as though defects are inevitable. Sure, they try to write good code, but when a defect is found, it's not a surprise. No big deal, just add
it to the list of bugs to fix. Bugs in other people's code are no surprise either. Because typical engineers view bugs as normal, they aren't focused on preventing
them.
The defect-free engineers, on the other hand, expect their code to have no defects. When a (rare) bug is found, they are very embarrassed and horrified. When
they encounter bugs in other people's code, they are disgusted. Because the defect-free engineers view a bug as a public disgrace, they are very motivated to do
whatever it takes to prevent all bugs.
In short, the defect-free engineers, who believe defect-free software is possible, have vastly lower defect rates than the typical engineer, who believes bugs are a
natural part of programming. The defect-free engineers have a markedly higher productivity.
In software quality, you get what you believe in!
Think Defect-Free Software is Important
Why is defect-free software important?
Delivering defect-free software reduces support costs.
Delivering defect-free software reduces programming costs.
Delivering defect-free software reduces development time.
Delivering defect-free software can provide a competitive advantage.

Commit to Delivering Defect-Free Software


Making a firm commitment to defect-free code and holding to that commitment, in spite of schedule and other pressures, is absolutely necessary to producing
defect-free code. As a nice side benefit, you will see improved schedules and reduced costs!

Design Your Code for Simplicity and Reliability


After attitude and commitment, program design and structure have the biggest impact on defect-free code. A clean, well structured design simplifies producing
reliable code. A poor design cripples the engineer, and will make it impossible to achieve defect-free code.
Each function should be precise -- it should have only one purpose. Each action or activity should be implemented in exactly one place. When programs are
structured this way, the engineer can easily find the right place to make a change. In the unlikely event that a bug is discovered in testing, the engineer can go
directly to the code with the defect and promptly correct it. This saves time and is the major cause of the faster schedules experienced with Defect-Free Software.
In addition to designing for clarity, it's important to keep the defect-free goal in mind. You want to choose designs that will be least likely to have bugs. In other
words, avoid tricky code. Don't start to optimize code unless you are sure there is a performance problem.

Trace Every Line of Code When Written


As each line of code is about to be executed, you should try to predict what the effect will be -- what data will be changed, which path a conditional will follow, etc.
If you can't predict what the effect will be, then you don't understand the program you are working on -- a very dangerous situation. If you don't predict correctly,
you have probably discovered a problem that should be addressed.
Tracing all new code also exposes code that hasn't been tested: by stepping through each line of code, you ensure that the new code is fully exercised.
Review Code by Programmer Peers
Peer code reviews have consistently been shown to be the single most cost-effective way of removing bugs from code. The process of explaining a new section of
code to another engineer and persuading that second engineer the code is defect-free has several positive impacts:
Exposes the design and implementation, with benefits similar to tracing the code.
Forces the engineer to articulate assumptions. About ten percent of our code reviews are stopped in progress as the authoring engineer suddenly says, "Oops!
Never mind!" because he suddenly realized that he had made an invalid assumption. (The review later resumes with the revised code.)
Allows more than one engineer to look at the code while it can still be easily changed. Code will be made simpler and easier to understand.
Encourages cross-training and sharing of techniques. By discussing design strategies and implementation techniques, each engineer learns from the experience
of their peers.
Peer code reviews seem to work best. Code reviews done by managers or senior technical staff can have some of the same benefits, but sometimes are less
effective due to the interpersonal dynamics.

Build Automated QA into Your Code


Obviously, to build defect-free code, you have to be able to test your code. In addition to including a testing plan/strategy into the implementation, you should
design specific code to provide for full, automated testability.
The most effective testing we use is fully automated or regression testing. This is a series of fully automated tests that are run after each build of a program. The
tests are designed to exercise every part of the program, and produce a success/failure report. The idea is to use the power of the computer to make sure that the
program hasn't been adversely affected by a change.
If the design is well structured, most changes should not have side effects. The purpose of these automated tests is to provide insurance that the coding
assumptions are valid, and that everything else still works. By making the tests completely automated, they can be run frequently and provide prompt feedback to
the engineer.
If tests are run manually, there is a chance that human error will miss a problem. Manual testing is also very expensive, usually too expensive to run after every change to a program.
There are a number of commercial testing tools available which are designed to help you automate your testing, particularly in GUI environments such as
Windows. Although they are no doubt better than manual testing, we have not found them to be effective, for a number of reasons.
By building support for automated testing into your program, you can approach 100% automated testing. Without this customized, built-in testability, you will be
lucky to achieve 35% automated testing, even with the best commercial QA testing tool. We recommend that you budget five percent of total engineering time to
creating support for automated QA testing.
Of course, each new piece of code should have a corresponding set of tests, added at the same time as the code is added, for the automated QA suite.
In order for fully automated execution of testing to be of value, the tests that are automatically executed and checked must cover the software fully. To the extent
that they don't, running the tests doesn't tell you anything about the part of your software that wasn't exercised by the testing. (This is true for all testing, whether
automated or manual.)
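As a minimal sketch of such a self-checking suite, here is what the approach looks like in Python's standard unittest framework. The `discount` function is a hypothetical stand-in for real application code; the point is that each test encodes a coding assumption, and running the suite after every build yields an automatic success/failure report.

```python
import unittest

# Hypothetical function under test (a stand-in for real application code):
# applies a percentage discount to a price.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    """Run after every build; each test encodes a coding assumption."""

    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    # exit=False prints the report without terminating the calling process.
    unittest.main(argv=["regression"], exit=False)
```

Because the suite is fully automated, it can be wired into the build so that every change gets prompt feedback.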
Build and Test Daily
Once you have a fully automated test suite, you should run it after every build. This gives developers feedback about the changes they are making, and it gives
management clear, objective feedback about the project status.
Clear, objective feedback about project status helps managers make better estimates and plans. This feedback can help you identify and address problems while
you still have time to do something about them. In addition, this clear, objective feedback puts managers in a better position to provide correct feedback to their
managers (or shareholders). Finally, this objective feedback helps managers decide when a project can be shipped or deployed.
The more prompt the feedback to the programmers, the more useful it is. The shorter the time between the creation of a defect and its discovery, the easier it is for
the programmer to understand just what they have done wrong. Prompt feedback of failing tests can work as a kind of positive reinforcement for development
techniques that work and negative reinforcement for techniques that don't.
By automating the build process as well, you can schedule builds of your system daily. By building daily, you will maximize the feedback to both your programmers
and your management.
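A daily build-and-test driver can be as simple as a script that runs each step in order and stops at the first failure. This sketch is illustrative: the `make all` and `make test` commands are placeholders for whatever your real build and regression-test entry points are, and the `runner` parameter is injectable so the pipeline logic itself can be tested.

```python
import subprocess

def run_pipeline(steps, runner=subprocess.call):
    """Run each (name, command) step in order; return (succeeded, report lines)."""
    report = []
    for name, command in steps:
        status = runner(command)          # 0 means success, non-zero means failure
        report.append(f"{name}: {'OK' if status == 0 else 'FAILED'}")
        if status != 0:
            return False, report          # stop early: prompt feedback to the team
    return True, report

# Example wiring; the make targets are placeholders for your real build
# and regression-test entry points.
nightly = [
    ("build", ["make", "all"]),
    ("regression tests", ["make", "test"]),
]
```

Scheduling this script once a day (e.g. via cron or a CI server) gives both programmers and management the clear, objective status feedback described above.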

Use Automated Checking Wherever Possible


There are a lot of existing tools that can be used to find errors in your code in an automatic or semiautomatic manner. Your programmers should be using these
tools wherever possible.
These tools should be used in addition to clean design, not instead of it. No matter how much you use automated checking tools, they alone
will never turn poorly designed, buggy code into defect-free code. You can, however, find a lot of bugs that would otherwise take much more time and effort to find
and fix.

Defect Tracking Objectives

1. Provide the ability to track defects/problems


2. Provide a defect tracking database
3. Provide project-level data entry support
4. Provide defect tracking/problem reporting workflow management
5. Provide standardized and custom query/reporting capabilities
6. Provide integration to software Version Management system
7. Provide integration to Help Desk system
8. Provide management information (cost of quality) and operational information (support project level testing process)
9. Facilitate communication among testers/developers, the help desk, and management

Defect Tracking Guidelines


A defect tracking/problem reporting system should provide:

• A mechanism for entering defects/problems which supports a team approach


• A permanent database for defect tracking/problem reporting
• A simple point and click interface for entering data and generating reports
• A defect tracking workflow
• An audit trail
• Control linkages

Defect/problem documentation
A defect tracking/problem reporting system should provide:

• Standardized
• Inputs
• Expected Results
• Actual Results
• Anomalies
• Date
• Time
• Procedure Step
• Environment
• Attempts To Repeat
• Testers
• Observers
• Non-Standardized
• Defect ID
• Priority
• Severity
• Test Cycle
• Test Procedure
• Test Case
• Occurrences
• Test Requirement
• Person Reporting
• Defect Status
• Defect Action
• Defect Description
• Defect Symptom
• Found In Build
• Software Module
• Module Description
• Related modules
• Person Assigned
• Date Assigned
• Estimated Time to Fix
• Resolution
• Resolution Description
• Fix Load Date
• Fix Load Number
• Repaired in Build
• Date Closed
• Contact Person
• Attachments
• Rework Cycles
• Owner
• Work Around
• Person Investigating
• Emergence/Scheduled
• Programming Time
• Process or Product
• Customized defect/problem reporting data fields
• ACD capability
• Predefined queries/reports
• Custom query/reporting
• Free text searching
• Cut and paste
• On-screen report display
• Printed reports
• Support all Network types
• Provide Record Locking
• Provide data recovery
• Support for dial-in access
• An interface to the E-mail system
• Manual notification
• Automatic notification of team members
• Password protection of team members
• Limited access to functions based on user type
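A small, illustrative subset of the standardized fields above, expressed as a Python data class. The field names and sample values here are invented for the example; a real defect tracking schema would carry many more of the fields listed.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectReport:
    # A subset of the standardized fields from the list above.
    defect_id: int
    summary: str
    inputs: str
    expected_result: str
    actual_result: str
    environment: str
    tester: str
    reported_at: datetime = field(default_factory=datetime.now)
    status: str = "Open"

# Hypothetical example record.
bug = DefectReport(
    defect_id=101,
    summary="Unable to update a record",
    inputs="Edit customer 42, change the phone number, press Save",
    expected_result="Record is updated and a confirmation is shown",
    actual_result="Error message: 'Unable to update the record'",
    environment="Windows client build 1.3, MySQL 8.0",
    tester="A. Tester",
)
```

Making the record a typed structure is what lets a tracking tool support the standardized queries and reports listed above.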

Defect Tracking/Problem Reporting Issues

• How do we manage defects and problems?


• How do we track defect trends in development projects?
• How do we manage and track workflow?
• Communicate the changes that must be made to the assigned developer(s)
• Communicate to QA that the change is completed and what code was changed
• Control retest and rework cycles
• How do we know when a problem has been resolved?
• How do we know when the software is ready for release?
• What data is required to support defect tracking and problem reporting?
• Single database?
• Multiple databases?
• How do we integrate defect tracking/problem reporting data from multiple sources?

How to Write a Fully Effective Bug Report


To write a fully effective report you must:
- Explain how to reproduce the problem.
- Analyze the error so you can describe it in a minimum number of steps.
- Write a report that is complete and easy to understand.

Write bug reports immediately; the longer you wait between finding the problem and reporting it, the more likely it is the description will be incomplete, the problem
not reproducible, or simply forgotten.
Writing a one-line report summary (the bug report's title) is an art. You must master it. Summaries help everyone quickly review outstanding problems and find
individual reports. The summary line is the most frequently and carefully read part of the report. When a summary makes a problem sound less severe than it is,
managers are more likely to defer it. Alternatively, if your summaries make problems sound more severe than they are, you will gain a reputation for alarmism.
Don't use the same summary for two different reports, even if they are similar. The summary line should describe only the problem, not the replication steps. Don't
run the summary into the description (Steps to reproduce) as they will usually be printed independently of each other in reports.
Ideally you should be able to write this clearly enough for a developer to reproduce and fix the problem, and another QA engineer to verify the fix without them
having to go back to you, the author, for more information. It is much better to over communicate in this field than say too little. Of course it is ideal if the problem is
reproducible and you can write down those steps. But if you can't reproduce a bug, and try and try and still can't reproduce it, admit it and write the report anyway.
A good programmer can often track down an irreproducible problem from a careful description. For a good discussion on analyzing problems and making them
reproducible, see Chapter 5 of Testing Computer Software by Cem Kaner.
The most controversial field in a bug report is often the bug's impact: Low, Medium, High, or Urgent. The report should show the priority which you, the bug
submitter, believe to be appropriate, and it should not be changed by others.
Bug Report Components
Report number:
Unique number given to a bug.

Program / module being tested:


The name of the program or module being tested.

Version & release number:


The version of the product that you are testing.

Problem Summary:
A one-line data entry field stating precisely what the problem is.

Report Type:
Describes the type of problem found; for example, it could be a software or hardware bug.

Severity:
Normally, how severe the bug is in your view as the reporter.
Various levels of severity: Low - Medium - High - Urgent

Environment:
Environment in which the bug is found.

Detailed Description:
Detailed description of the bug that was found.

How to reproduce:
Detailed description of how to reproduce the bug.

Reported by:
The name of the person who wrote the report.

Assigned to developer:
The name of the developer assigned to fix the bug.

Status:
Open:
The status of a bug when it is first entered.
Fixed / feedback:
The status of the bug once it has been fixed.
Closed:
The status of the bug once the fix has been verified.
(A bug can only be closed by a QA person. Usually, the problem is closed by the QA manager.)
Deferred:
The status of the bug when it has been postponed.
User error:
The status of the bug when the reported behaviour was caused by user error.
Not a bug:
The status of the bug when the reported behaviour is not actually a bug.
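The status values above form a small workflow; one way a tracking tool might enforce it is with a table of allowed transitions. The transition set below is an illustrative assumption, not a standard — adapt it to your own process.

```python
# Allowed status transitions (illustrative; adapt to your own process).
ALLOWED = {
    "Open":       {"Fixed", "Deferred", "User error", "Not a bug"},
    "Fixed":      {"Closed", "Open"},   # reopened if the retest fails
    "Deferred":   {"Open"},
    "User error": set(),
    "Not a bug":  set(),
    "Closed":     set(),                # only QA may perform the closing move
}

def move(current, new):
    """Return the new status, or raise if the transition is not allowed."""
    if new not in ALLOWED[current]:
        raise ValueError(f"cannot move a bug from {current!r} to {new!r}")
    return new
```

Encoding the workflow as data rather than scattered checks makes it easy for an administrator to reconfigure.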

Priority:
Assigned by the project manager who asks the programmers to fix bugs in priority order.

Resolution:
Defines the current status of the problem. There are four types of resolution: deferred, not a problem, will not fix, and as designed.
Defect (bug) report:
An incident report defining the type of defect and the circumstances in which it occurred. (defect tracking system)
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made
regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate
these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process:

• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test
script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is
needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

Effective methods of writing Defect description


Testing is commonly used to execute software and find defects. A defect describes any variance or discrepancy between actual and expected results in
application software. Defects should be documented in such a way that any developer can understand the description and reproduce the defect in their own
environment.

Defects can be logged using tools (e.g. Siebel, Track, PVCS, etc.), or by documenting the defects and maintaining the document in a repository. Testers should
write defect descriptions efficiently so that they are useful to others within the project, and the documentation should be transparent.

Best Practices for Writing Defect Descriptions


· Pre-Requisite of a Defect Document.
· An Abstract of a defect
· Description and Observation of a defect
· Screen shot of a Defect.

Pre-Requisite of a Defect Document


Document should contain few standard details:

- Author Name or Submitter Name


- Type of Defect (Eg: Enhancement, Issue or Defect)
- Submitted Date
- Fixed Date
- Status of the defect
- Project Phase (Eg: version 1.1, 2.0 etc…)
- Version Found (Daily builds)
- Severity (Eg, Critical, major, Minor, Cosmetic)

Abstract of a Defect
Testers should specify only a brief description of the defect.

Eg: Unable to update a record

Description and Observation of Defect


In the Description column, the first few lines should state the exact problem in the application. The following paragraph should then give the details, such as the
steps to reproduce (e.g., from application logon until the point where the defect was found).

Follow this with an observation (e.g., the system displays an error message "Unable to update the record", but according to the functionality the system should
update the record). It also helps if the tester adds a few more observation points, such as:
- The defect occurs only in a particular version (e.g., Adobe versions for a report)
- The defect is also found in other modules
- Inconsistency while reproducing the defect (e.g., sometimes reproducible and sometimes not)

Screen Shot of a defect


A screen shot provided along with the defect document makes it much easier for the developers to identify the defect and its cause, and helps the testers verify
that particular defect in the future.

Tips for Screen shot:


- The screen shot should be self-explanatory
- Highlight exactly where the problem occurred with a figure such as an arrow, box, or circle (this kind of highlighting is especially helpful for GUI/cosmetic defects)
- Use different colors for specific descriptions

Conclusion
A brief, clear description of a defect makes it:
· Easy to analyze the cause of the defect
· Easy to fix the defect
· Possible to avoid rework
· Possible for testers to save time
· Possible to avoid defect duplication
· Easy to keep track of defects
Preventing bugs
It can be psychologically difficult for some engineers to accept that their design contains bugs. They may hide behind euphemisms like "issues" or "unplanned
features". This is also true of corporate software where a fix for a bug is often called "a reliability enhancement".

Bugs are a consequence of the nature of the programming task. Some bugs arise from simple oversights made when computer programmers write source code
carelessly or transcribe data incorrectly. Many off-by-one errors fall into this category. Other bugs arise from unintended interactions between different parts of a
computer program. This happens because computer programs are often complex, often having been programmed by several different people over a great length
of time, so that programmers are unable to mentally keep track of every possible way in which different parts can interact (the so-called hrair limit). Many race
condition bugs fall into this category.

The computer software industry has put a great deal of effort into finding methods for preventing programmers from inadvertently introducing bugs while writing
software. These include:

Programming techniques
Bugs often create inconsistencies in the internal data of a running program. Programs can be written to check the consistency of their own internal data while
running. If an inconsistency is encountered, the program can immediately halt, so that the bug can be located and fixed. Alternatively, the program can simply
inform the user, attempt to correct the inconsistency, and continue running.
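As a sketch of this technique, here is a hypothetical order object that checks its own invariant (the stored total must equal the sum of its line items) after every change, and can either halt so the bug can be located or repair the inconsistency and continue running.

```python
class Order:
    """Keeps a running total and verifies it against the line items."""

    def __init__(self):
        self.items = []        # list of (name, price) pairs
        self.total = 0.0

    def add(self, name, price):
        self.items.append((name, price))
        self.total += price
        self.check()           # verify the invariant after every change

    def check(self, repair=False):
        expected = round(sum(price for _, price in self.items), 2)
        if round(self.total, 2) != expected:
            if repair:
                self.total = expected          # correct and keep running
            else:
                raise AssertionError("order total is inconsistent")
```

Halting immediately keeps the distance between the creation of the bug and its discovery as short as possible.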
Development methodologies
There are several schemes for managing programmer activity, so that fewer bugs are produced. Many of these fall under the discipline of software engineering
(which addresses software design issues as well.) For example, formal program specifications are used to state the exact behavior of programs, so that design
bugs can be eliminated.
Programming language support
Programming languages often include features which help programmers deal with bugs, such as exception handling. In addition, many recently-invented
languages have deliberately excluded features which can easily lead to bugs. For example, the Java programming language does not support pointer arithmetic.

Debugging
Finding and fixing bugs, or "debugging", has always been a major part of computer programming. Maurice Wilkes, an early computing pioneer, described his
realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his own programs. As computer programs grow more complex, bugs
become more common and difficult to fix. Often programmers spend more time and effort finding and fixing bugs than writing new code.
Usually, the most difficult part of debugging is locating the erroneous part of the source code. Once the mistake is found, correcting it is usually easy. Programs
known as debuggers exist to help programmers locate bugs. However, even with the aid of a debugger, locating bugs is something of an art.
Typically, the first step in locating a bug is finding a way to reproduce it easily. Once the bug is reproduced, the programmer can use a debugger or some other tool
to monitor the execution of the program in the faulty region, and (eventually) find the problem. However, it is not always easy to reproduce bugs. Some bugs are
triggered by inputs to the program which may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug that
occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in
testing or when the manufacturer attempted to duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs
(humorously named after the Heisenberg uncertainty principle.)
Debugging is still a tedious task requiring considerable manpower. Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, there has been a
renewed interest in the development of effective automated aids to debugging. For instance, methods of static analysis by abstract interpretation have already
made significant achievements, while still remaining very much a work in progress.
Common types of computer bugs (1)

* Divide by zero
* Infinite loops
* Arithmetic overflow or underflow
* Exceeding array bounds
* Using an uninitialized variable
* Accessing memory not owned (Access violation)
* Memory leak or Handle leak
* Stack overflow or underflow
* Buffer overflow
* Deadlock
* Off by one error
* Race hazard
* Loss of precision in type conversion
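Several of these, such as the off-by-one error, are easy to demonstrate. The helper below is invented for illustration: the buggy version stops one element early, which is the classic mistake with half-open ranges.

```python
def last_items_buggy(seq, n):
    # Off-by-one error: range() is half-open, so this stops one element early.
    return [seq[i] for i in range(len(seq) - n, len(seq) - 1)]

def last_items_fixed(seq, n):
    # Correct: the half-open range ends at len(seq).
    return [seq[i] for i in range(len(seq) - n, len(seq))]
```

A regression test comparing the two immediately exposes the missing last element.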

ISO 9126


ISO 9126 is an international standard for the evaluation of software. It is being superseded by the SQuaRE project, ISO 25000:2005, which follows the same
general concepts.

The standard is divided into four parts, which address, respectively, the following subjects: quality model; external metrics; internal metrics; and quality in use
metrics.

The quality model established in the first part of the standard, ISO 9126-1, classifies software quality in a structured set of factors as follows:

* Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or
implied needs.
o Suitability
o Accuracy
o Interoperability
o Compliance
o Security
* Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
o Maturity
o Recoverability
o Fault Tolerance
* Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
o Learnability
o Understandability
o Operability
* Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated
conditions.
o Time Behaviour
o Resource Behaviour
* Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
o Stability
o Analysability
o Changeability
o Testability
* Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
o Installability
o Conformance
o Replaceability
o Adaptability

The sub-characteristic Conformance is not listed above and applies to all characteristics. Examples are conformance to legislation concerning Usability or
Reliability.

Each quality sub-characteristic (such as adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product.
Attributes are not defined in the standard, as they vary between different software products.

Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user
extends to operators as well as to programmers, who are users of components such as software libraries.

The standard provides a framework for organizations to define a quality model for a software product. On doing so, however, it leaves up to each organization the
task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics that evaluate the degree of presence
of quality attributes.

Internal metrics are those which do not rely on software execution (static measures).

External metrics are applicable to running software.

Quality in use metrics are only available when the final product is used in real conditions.

Ideally, the internal quality determines the external quality, which in turn determines the results of quality in use.
This standard stems from the model established in 1977 by McCall and his colleagues, who proposed a model to specify software quality. The McCall quality
model is organized around three types of Quality Characteristics:
* Factors (To specify): They describe the external view of the software, as viewed by the users.
* Criteria (To build): They describe the internal view of the software, as seen by the developer.
* Metrics (To control): They are defined and used to provide a scale and method for measurement.

ISO 9126 distinguishes between a defect and a nonconformity: a defect is "the nonfulfilment of intended usage requirements", whereas a nonconformity is "the
nonfulfilment of specified requirements". A similar distinction is made between validation and verification, known as V&V in the testing trade.
[Image: Glitch City, a Pokémon programming error that creates a jumble of pixels.]

A glitch is a short-lived fault in a system. The term is particularly common in the computing and electronics industries, and in circuit bending, as well as among
players of video games, although it is applied to all types of systems including human organizations. The term derives from the German glitschen, meaning 'to slip.'

In electronics, a glitch is an electrical pulse of short duration that is usually the result of a fault or design error, particularly in a digital circuit. For example, many
electronic components such as flip-flops are triggered by a pulse that must not be shorter than a specified minimum duration, otherwise the component may
malfunction. A pulse shorter than the specified minimum is called a glitch. A related concept is the runt pulse, a pulse whose amplitude is smaller than the minimum
level specified for correct operation, and a spike, a short pulse similar to a glitch but often caused by ringing or crosstalk.
In video games, a glitch is a term used by players to indicate a bug or programming error of some sort. It may refer to either a helpful or harmful error, but never an
intended behavior. A programming error that makes the game freeze is often referred to as a glitch, as is an error that, for example, gives the player 100 lives
automatically. The occurrence of some glitches can be replicated deliberately by doing certain tasks in a certain order. For example, the Missingno., 'M, and Glitch
City glitches in the Pokémon series follow this principle. The Mew glitch also works on the same principle.
The practice of exploiting glitches in video games is known as "glitching." For example, in an online game someone may use an error in the map to get an
advantage. This is sometimes considered cheating, but sometimes just considered part of the game. It is often against a game's TOS (Terms of Service) and will
be punished if discovered.
Sometimes glitches will be mistaken for hidden features. In the arcade version of Mortal Kombat, a rare glitch occasionally caused two characters to be mixed
together. Most often, these were ninja characters, resulting in a semi-red ninja character with the name "ERMAC" (short for "error machine"). Upon discovering
this, many players believed they had uncovered a secret character, when in fact they had only uncovered a programming bug. Due to the rumors surrounding the
glitch, Midway did eventually include a red ninja character named Ermac as an official character in Ultimate Mortal Kombat 3, and he has subsequently appeared
in other Mortal Kombat games, becoming an instant fan favorite.

A workaround is a bypass of a recognized problem in a system. A workaround is typically a temporary fix that implies that a genuine solution to the problem is
needed. Frequently workarounds are as creative as true solutions, involving out-of-the-box thinking in their creation.
Typically they are considered brittle in that they will not respond well to further pressure from a system beyond the original design. In implementing a workaround it
is important to flag the change so as to later implement a proper solution.
Placing pressure on a workaround may result in later failures in the system. For example, in computer programming workarounds are often used to address a
problem in a library, such as an incorrect return value. When the library is changed, the workaround may break the overall program functionality, since it may
expect the older, wrong behaviour from the library.
A bugtracker is a ticket tracking system that is designed especially to manage problems (software bugs) with computer programs.
Typically bug tracking software allows the user to quickly enter bugs and search on them. In addition some allow users to specify a workflow for a bug that
automates a bug's lifecycle.
Most bug tracking software allows the administrator of the system to configure what fields are included on a bug.
Having a bug tracking solution is critical for most systems. Without a good bug tracking solution bugs will eventually get lost or poorly prioritized.

Bugzilla is a general-purpose bug-tracking tool originally developed and used by the Mozilla Foundation. Since Bugzilla is web-based and is free software /
open-source software, it is also the bug tracking tool of choice for many projects, both open source and proprietary.
Bugzilla relies on an installed web server (such as Apache) and a database management system (such as MySQL or PostgreSQL) to perform its work. Bugs can
be submitted by anybody, and will be assigned to a particular developer. Various status updates for each bug are allowed, together with user notes and bug
examples.
Bugzilla's notion of a bug is very general; for instance, mozilla.org uses it to track feature requests as well.
Requirements
Release notes such as those for Bugzilla 2.20 indicate the exact set of dependencies, which include:
* A compatible database server (often a version of MySQL)
* A suitable release of Perl 5
* An assortment of Perl modules
* A compatible web server such as Apache (though any web server that supports CGI can work)
* A suitable mail transfer agent such as Sendmail, qmail, Postfix, or Exim
Anti-patterns, also referred to as pitfalls, are classes of commonly-reinvented bad solutions to problems. They are studied, as a category, in order that they may be
avoided in the future, and that instances of them may be recognized when investigating non-working systems.
The term originates in computer science, from the Gang of Four's Design Patterns book, which laid out examples of good programming practice. The authors
termed these good methods "design patterns", and opposed them to "anti-patterns". Part of good programming practice is the avoidance of anti-patterns.
The concept is readily applied to engineering in general, and also applies outside engineering, in any human endeavour. Although the term is not commonly used
outside engineering, the concept is quite universal.
Some recognised computer programming anti-patterns

* Abstraction inversion: Creating simple constructs on top of complex ones (controversial)


* Accidental complexity: Introducing unnecessary complexity into a solution

* Action at a distance: Unexpected interaction between widely separated parts of a system
* Accumulate and fire: Setting parameters for subroutines in a collection of global variables
* Ambiguous viewpoint: Presenting a model (usually OOAD) without specifying its viewpoint
* BaseBean: Inheriting functionality from a utility class rather than delegating to it
* Big ball of mud: A system with no recognisable structure
* Blind faith: Lack of checking of (a) the correctness of a bug fix or (b) the result of a subroutine
* Blob: see God object
* Boat anchor: Retaining a part of a system that has no longer any use
* Busy spin: Consuming CPU while waiting for something to happen, usually by repeated checking instead of proper messaging
* Caching failure: Forgetting to reset an error flag when an error has been corrected
* Checking type instead of interface: Checking that an object has a specific type when only a certain contract is required
* Code momentum: Over-constraining part of a system by repeatedly assuming things about it in other parts
* Coding by exception: Adding new code to handle each special case as it is recognised
* Copy and paste programming: Copying (and modifying) existing code without creating generic solutions
* De-Factoring: The process of removing functionality and replacing it with documentation
* DLL hell: Problems with versions, availability and multiplication of DLLs
* Double-checked locking: Checking, before locking, whether locking is necessary, in a way that may fail on modern hardware or with modern compilers
* Empty subclass failure: Creating a (Perl) class that fails the "Empty Subclass Test" by behaving differently from a class derived from it without modifications
* Gas factory: An unnecessarily complex design
* God object: Concentrating too many functions in a single part of the design (class)
* Golden hammer: Assuming that a favorite solution is universally applicable
* Improbability factor: Assuming that it is improbable that a known error becomes effective
* Input kludge: Failing to specify and implement handling of possibly invalid input
* Interface bloat: Making an interface so powerful that it is too hard to implement
* Hard code: Embedding assumptions about the environment of a system at many points in its implementation
* Lava flow: Retaining undesirable (redundant or low-quality) code because removing it is too expensive or has unpredictable consequences
* Magic numbers: Including unexplained numbers in algorithms
* Magic pushbutton: Implementing the results of user actions in terms of an inappropriate (insufficiently abstract) interface
* Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
* Premature optimization: Optimization on the basis of insufficient information
* Poltergeists: Objects whose sole purpose is to pass information to another object
* Procedural code (when another paradigm is more appropriate)
* Race hazard: Failing to see the consequence of different orders of events
* Re-Coupling: The process of introducing unnecessary object dependency
* Reinventing the wheel: Failing to adopt an existing solution
* Reinventing the square wheel: Creating a poor solution when a good one exists
* Smoke and mirrors: Demonstrating how unimplemented functions will appear
* Software bloat: Allowing successive versions of a system to demand ever more resources
* Spaghetti code: Systems whose structure is barely comprehensible, especially because of misuse of code structures
* Stovepipe system: A barely maintainable assemblage of ill-related components
* Yo-yo problem: A structure (e.g. of inheritance) that is hard to understand due to excessive fragmentation
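
Several of these anti-patterns are easiest to see in code. A minimal sketch of the "magic numbers" entry, using a hypothetical one-day expiry period:

```python
# "Magic numbers" anti-pattern: 86400 appears in the logic unexplained.
def expires_magic(start):
    return start + 86400

# Remedy: name the constant so the assumption is visible and changeable.
SECONDS_PER_DAY = 24 * 60 * 60  # hypothetical expiry period of one day

def expires(start):
    return start + SECONDS_PER_DAY

assert expires(0) == expires_magic(0) == 86400
```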

Some Organisational Anti-patterns

* Analysis paralysis: Devoting disproportionate effort to the analysis phase of a project
* Continuous obsolescence: Devoting disproportionate effort to porting a system to new environments
* Creeping featurism: Adding new features to the detriment of the quality of a system
* Design by committee: The result of having many contributors to a design, but no unifying vision
* Escalation of commitment: Failing to revoke a decision when it proves wrong
* I told you so: When the ignored warning of an expert proves justified
* Management by numbers: Paying excessive attention to quantitative management criteria, when these are inessential or cost too much to acquire
* Mushroom management: Keeping employees uninformed and abused
* Scope creep: Allowing the scope of a project to grow without proper control
* Vendor lock-in: Making a system excessively dependent on an externally supplied component
* Warm body: A person whose contribution to a project is in doubt, especially if taken on in panic

Some social anti-patterns

The status of some of these is likely to be controversial.

* Censorship: Suppressing discussion prevents political, social, and scientific progress
* Concentrated power: Individuals abuse power, even if initially well-meaning
* Dictatorship: No individual has all the skills necessary to govern; also power corrupts
* Discrimination: Discrimination on irrelevant features yields economic inefficiency and social resentment
* Dogmatic religion: Dogma suppresses individual thought and prevents progress
* Intolerance: Insisting on changing undesirable-but-harmless features of other people causes resentment and is an endless task
* Monopoly: Without competition most of the effects of a free market don't occur, and a private company has no incentive to do business fairly
* Plurality voting system: Politics under plurality voting degenerates into two highly-polarised parties, with all other political views suppressed
* Popularity contest: Popularity becomes a self-reinforcing quality, and is unrelated to any useful measure of merit
* Segregation: Separate but equal is rarely, if ever, equal; causes resentment
* Single-party system: Without electoral competition the party has no incentive to govern fairly
* Totalitarianism: Suppressing individuality causes resentment, and the approved way of life is never even remotely suitable for everyone
* Victimless crime: Suppressing harmless behaviour creates a subculture of otherwise-law-abiding people for whom the legal system is an enemy
* Witch hunt: Scapegoats are easy to find, but if the problem is never actually solved then more scapegoats will always be required
* Year Zero: Social change is an inherently slow process; rushing it yields disaster

Bit rot

Bit rot is a colloquial computing term used to facetiously describe the spontaneous degradation of a software program over time. The term implies that software can literally wear out or rust like a physical tool. Bit rot is also used to describe the discredited idea [1] that a computer's memory may occasionally be altered by cosmic rays. More commonly, bit rot refers to the decay of physical storage media.
When a program that has been running correctly for an extended time suddenly malfunctions for no apparent reason, programmers often jokingly attribute the
failure to bit rot. Such an effect may be due to a memory leak or other nonobvious software bug. Many times, although there is no obvious change in the program's
operating environment, a subtle difference has occurred that is triggering a latent software error.
Bit rot is often defined as the event in which the small electric charge of a bit in memory disperses, possibly altering program code.
Bit rot can also be used to describe the very real phenomenon of data stored in EPROMs gradually decaying over the duration of many years, or in the decay of
data stored on CD or DVD disks or other types of consumer storage.
The cause of bit rot varies with the medium. Floppy disks and magnetic tape may experience bit rot as bits lose their magnetic orientation. In CDs and DVDs, the breakdown of the material on which the data is stored may cause bit rot. This can be mitigated by storing disks in a dark, cool location with low humidity; archival-quality disks are also available. Old punch cards may experience a more literal form of rot, as the paper on which the programs are stored begins to decay.
Rarely, bit rot is referred to as the process by which data becomes inaccessible due to the lack of working devices to read old data storage formats. (For example,
a game stored on a Floppy Disk may be referred to as having succumbed to bit rot if the user no longer possesses a floppy disk drive to read the disk). See also:
Link rot, Code rot

Unusual software bugs are more difficult to understand and repair than ordinary software bugs. There are several kinds, mostly named after scientists who
discovered counterintuitive things.

Heisenbugs
A heisenbug is a computer bug that disappears or alters its characteristics when it is investigated.
Common examples are bugs that occur in a release-mode compile of a program but do not occur in a debug-mode build, or bugs caused by a
race condition. The name is a pun on the physics term "Heisenberg uncertainty principle", which is popularly believed to refer to the way observers affect the
observed in quantum mechanics.
In an interview in ACM Queue vol. 2, no. 8 - November 2004 Bruce Lindsay tells of being there when the term was first used and that it was created because
Heisenberg said "the more closely you look at one thing, the less closely can you see something else."
A Bohr bug (named after the Bohr atom model) is a bug that, in contrast with heisenbugs, does not disappear or alter its characteristics when it is investigated.
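
The race-condition case can be made concrete. The sketch below interleaves a read-modify-write sequence by hand so the lost update is reproducible; in a real multithreaded program, the timing perturbation introduced by a debugger or extra logging often hides exactly this interleaving, which is what makes it a heisenbug:

```python
# A classic heisenbug source: an unsynchronised read-modify-write race.
# The two "threads" are interleaved by hand so the lost update is
# deterministic here; under a debugger the timing usually differs and
# the bug seems to vanish.
counter = 0

a_read = counter      # thread A reads 0
b_read = counter      # thread B reads 0 before A has written back
counter = a_read + 1  # A writes 1
counter = b_read + 1  # B also writes 1 -- A's increment is lost

assert counter == 1   # two increments ran, but the counter shows one
```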
Mandelbugs
A mandelbug (named after fractal innovator Benoît Mandelbrot) is a computer bug whose causes are so complex that its behavior appears chaotic. This word also
implies that the speaker thinks it is a Bohr bug rather than a heisenbug.
It can be argued, according to the same principle as the Turing test, that if there is no way for a judge to differentiate between a bug whose behavior appears chaotic
and a bug whose behavior actually is chaotic, then there is no relevance in the distinction between mandelbug and heisenbug, since there is no way to tell them
apart.
Some use mandelbug to describe a bug whose behavior does not appear chaotic, but whose causes are so complex that there is no practical solution. An example
of this is a bug caused by a flaw in the fundamental design of the entire system.
Schroedinbugs
A Schroedinbug is a bug that manifests itself apparently only after the software is used in an unusual way or seemingly at the point in time that a programmer
reading the source code notices that the program should never have worked in the first place, at which point the program stops working entirely until the
mysteriously now non-functioning code is repaired. FOLDOC, in a statement of apparent jest, adds: "Though... this sounds impossible, it happens; some programs
have harboured latent schroedinbugs for years."
The name schroedinbug is derived from the Schrödinger's cat thought experiment. A well written program executing in a reliable computing environment is
expected to follow the principle of determinism, and as such the quantum questions of observability (i.e. breaking the program by reading the source code) posited
by Schrödinger (i.e. killing the cat by opening the box) cannot actually affect the operation of a program. However, quickly repairing an obviously defective piece of
code is often more important than attempting to determine by what arcane set of circumstances it accidentally worked in the first place or exactly why it stopped.
When a programmer declares that the code could never have worked in the first place, despite evidence to the contrary, the complexity of the computing system has caused the programmer to fall back on superstition. For example, a database program may have initially worked on a small number of records, including test data used during development, but broke once the amount of data reached a certain limit, without this cause being at all intuitive. A programmer who does not know the cause, and who does not consider the normal growth of the database as a factor in the breakage, could label the defect a schroedinbug.
Appearance in Fiction
In the independent movie 'Schrödinger's Cat', a Schroedinbug is found in the programming of American defense systems, causing a catastrophic security failure.
Software Quality Assurance
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and
procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Software Quality Assurance


(1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical
requirements.
(2) A set of activities designed to evaluate the process by which products are developed or manufactured.
Software Quality Assurance Activities

• Application of Technical Methods (Employing proper methods and tools for developing software)
• Conduct of Formal Technical Review (FTR)
• Testing of Software
• Enforcement of Standards (Customer imposed standards or management imposed standards)
• Control of Change (Assess the need for change, document the change)
• Measurement (Software Metrics to measure the quality, quantifiable)
• Records Keeping and Recording (Documentation, reviewed, change control etc. i.e. benefits of docs).
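
As a sketch of the measurement activity above, one of the simplest software metrics is defect density, the number of defects found per thousand lines of code (KLOC); the figures below are hypothetical:

```python
# Defect density: defects found per thousand lines of code (KLOC).
def defect_density(defects_found, lines_of_code):
    return defects_found / (lines_of_code / 1000)

# A hypothetical 25,000-line module with 30 logged defects:
assert defect_density(30, 25_000) == 1.2  # 1.2 defects per KLOC
```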

What is software quality?


The quality of the software varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability.
See quality standard ISO 9126 for more information on this subject.

What is quality?
Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However,
quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development
project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization's
management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on
quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.

Software Testing
Software testing is a critical component of the software engineering process. It is an element of software quality assurance and can be described as a process of
running a program in such a manner as to uncover any errors. This process, while seen by some as tedious, tiresome and unnecessary, plays a vital role in
software development.

Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while
using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally
attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual.
Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers.
It will depend on what best fits an organization's size and business structure.

What is software testing?


1) Software testing is a process that identifies the correctness, completeness, and quality of software. Strictly speaking, testing cannot establish the correctness of software:
it can find defects, but it cannot prove that no defects remain.
2) It is a systematic analysis of the software to see whether it has performed to specified requirements. Software testing uncovers errors, but it cannot tell us that no
errors remain.

Quality Assurance
(1)The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements.
(2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and
procedures.
(3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of confidence in data
integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output.
(4) (QA) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the product are working as
expected individually and collectively.

Quality Control
The operational techniques and procedures used to achieve quality requirements.

What is test methodology?


Test methodology is up to the end client, and can be used, reused, and molded to your end client's needs. Rob Davis believes that using the right test
methodology is important in the development and ongoing maintenance of his clients' applications.

What is software testing methodology?


One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests. This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his clients' applications.

What is the general testing process?


The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually
includes test cases and test procedures) and the execution of tests. Test data are inputs that have been devised to test the system.
Test cases are an input and output specification plus a statement of the function under test.
Test data can be generated automatically (simulated) or real (live).

The stages in the testing process are as follows:


1. Unit testing: (Code Oriented)
Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.
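
As an illustration, a unit test might look like the sketch below; the component under test is a hypothetical discount routine, exercised on its own, without any other system component:

```python
import unittest

# Hypothetical component under test: a stand-alone discount routine.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - price * percent // 100

# The unit is tested independently, with both normal and invalid inputs.
class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(200, 0), 200)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200, 150)

# Run the tests for this one component.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```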

2. Module testing:
A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures and functions. A module
encapsulates related components so it can be tested without other system modules.

3. Sub-system testing: (Integration Testing) (Design Oriented)
This phase involves testing collections of modules, which have been integrated into sub-systems. Sub-systems may be independently designed and implemented.
The most common problems that arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore
concentrate on the detection of interface errors by rigorously exercising these interfaces.

4. System testing:
The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions
between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.

5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather
than simulated test data. Acceptance testing may reveal errors and omissions in the system's requirements definition (user oriented), because real data exercises
the system in different ways from the test data. Acceptance testing may also reveal requirements problems where the system facilities do not really meet the users'
needs (functional) or the system performance (non-functional) is unacceptable.

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client. The alpha testing process continues until the system
developer and the client agree that the delivered system is an acceptable implementation of the system requirements.
When a system is to be marketed as a software product, a testing process called beta testing is often used.

Beta testing involves delivering a system to a number of potential customers who agree to use that system. They report problems to the system developers. This
exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and
either released for further beta testing or for general sale.

Software Testing Strategies
A strategy for software testing integrates software test case design techniques into a well - planned series of steps that result in the successful construction of
software.

Common Characteristics of Software Testing Strategies


-Testing begins at module level and works outward towards the integration of the entire system.
-Different testing techniques are appropriate at different points in time.
-Testing is conducted by the developer of the software and for large projects by an independent test group.
-Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

from low-level to high level (Testing in Stages)


Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules that are
composed of procedures and functions. The testing process should therefore proceed in stages where testing is carried out incrementally in conjunction with
system implementation.
The most widely used testing process consists of five stages:

1. Unit testing (component testing)
2. Module testing
3. Sub-system testing (integration testing)
4. System testing
5. Acceptance testing (user testing)

The first three stages are verification activities (process oriented), typically using white box testing techniques - tests that are derived from knowledge of the program's structure and implementation. System and acceptance testing are validation activities (product oriented), typically using black box testing techniques - tests that are derived from the program specification.
However, as defects are discovered at any one stage, they require program modifications to correct them and this may require other stages in the testing process
to be repeated.
Errors in program components, say, may come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed
back from later stages to earlier parts of the process.

Testing Strategies
A testing strategy is a general approach rather than a method of devising particular system or component tests.
Different strategies may be adopted depending on the type of system to be tested and the development process used. The testing strategies are

Top-down Testing
Bottom-up Testing
Thread Testing
Stress Testing
Back-to-back Testing
1. Top-down testing
Where testing starts with the most abstract component and works downwards.

2. Bottom-up testing
Where testing starts with the fundamental components and works upwards.

3. Thread testing
Which is used for systems with multiple processes where the processing of a transaction threads its way through these processes.

4. Stress testing
Which relies on stressing the system by going beyond its specified limits and hence testing how well the system can cope with over-load situations.

5. Back-to-back testing
Which is used when more than one version of a system is available. The versions are tested together and their outputs are compared.

6. Performance testing
This is used to test the run-time performance of software.

7. Security testing.
This attempts to verify that protection mechanisms built into the system will protect it from improper penetration.

8. Recovery testing.
This forces software to fail in a variety of ways and verifies that recovery is properly performed.
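
Back-to-back testing (strategy 5 above) can be sketched as feeding identical inputs to two versions of a component and comparing the outputs; both implementations below are hypothetical stand-ins for an old and a reworked version:

```python
# Back-to-back testing: run the same inputs through two versions of a
# component and compare the outputs.
def average_v1(values):  # original implementation
    total = 0
    for v in values:
        total += v
    return total / len(values)

def average_v2(values):  # reworked implementation
    return sum(values) / len(values)

test_inputs = [[1, 2, 3], [10], [2, 4, 6, 8], [-5, 5]]
for case in test_inputs:
    assert average_v1(case) == average_v2(case), f"mismatch on {case}"
```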

Large systems are usually tested using a mixture of these strategies rather than any single approach. Different strategies may be needed for different parts of the
system and at different stages in the testing process.

Whatever testing strategy is adopted, it is always sensible to adopt an incremental approach to sub-system and system testing. Rather than integrate all
components into a system and then start testing, the system should be tested incrementally. Each increment should be tested before the next increment is added
to the system. This process should continue until all modules have been incorporated into the system.

When a module is introduced at some stage in this process, tests which were previously unsuccessful may now detect defects. These defects are probably due
to interactions with the new module. The source of the problem is localized to some extent, thus simplifying defect location and repair.

Debugging
Debugging techniques include brute force, backtracking, and cause elimination.

Each testing activity pairs with a development phase:

Unit Testing (Coding): Focuses on each module and whether it works properly. Makes heavy use of white box testing.

Integration Testing (Design): Centered on making sure that each module works with another module. Comprised of two kinds: top-down and bottom-up integration. Alternatively, it focuses on the design and construction of the software architecture. Makes heavy use of black box testing. (Either answer is acceptable.)

Validation Testing (Analysis): Ensures conformity with requirements.

Systems Testing (Systems Engineering): Makes sure that the software product works with the external environment, e.g., computer system, other software products.
Drivers and Stubs

Driver: a dummy main program

Stub: a dummy sub-program
Because the modules are not yet stand-alone programs, drivers and/or stubs have to be developed to test each unit.
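
A minimal sketch of a driver and a stub, with a hypothetical tax-rate lookup standing in for an unfinished module:

```python
# Stub: a dummy sub-program standing in for a module that is not yet
# available. Here it fakes a tax-rate lookup with a canned answer.
def get_tax_rate_stub(region):
    return 0.25  # hypothetical rate, for illustration only

# Unit under test: depends on the (unfinished) tax-rate module.
def gross_price(net, region, rate_lookup):
    return net * (1 + rate_lookup(region))

# Driver: a dummy main program that exercises the unit via the stub.
def driver():
    result = gross_price(100, "EU", get_tax_rate_stub)
    print("gross_price:", result)
    return result

assert driver() == 125.0
```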
How do you create a test plan/design?
Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken
into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report
results. Generally speaking...
* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
* Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
* It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
* Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a
controlled environment.
* Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.
* A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate
the status of the entrance criteria of the release.
Inputs for this process:
* Approved Test Strategy Document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document,
source code, and software complexity data.
Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions, and test data.
* Reports of software design issues, given to software developers for correction.
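
The mapping of test scripts back to requirements mentioned above can be sketched as a small traceability matrix; all IDs are hypothetical:

```python
# A minimal traceability matrix: each test script is mapped back to the
# requirement(s) it covers.
traceability = {
    "TC-001": ["REQ-1"],
    "TC-002": ["REQ-1", "REQ-2"],
    "TC-003": ["REQ-3"],
}
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Every test must be within scope (trace to a known requirement)...
for test, reqs in traceability.items():
    assert set(reqs) <= requirements, f"{test} is out of scope"

# ...and every requirement should be covered by at least one test.
covered = {r for reqs in traceability.values() for r in reqs}
assert covered == requirements
```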

What is the purpose of a test plan?


Reason number 1: We create a test plan because preparing it helps us to think through the efforts needed to validate the acceptability of a software product.
Reason number 2: We create a test plan because it can and will help people outside the test group to understand the why and how of product validation.
Reason number 3: We create a test plan because, in regulated environments, we have to have a written test plan.
Reason number 4: We create a test plan because the general testing process includes the creation of a test plan.
Reason number 5: We create a test plan because we want a document that describes the objectives, scope, approach and focus of the software testing effort.
Reason number 6: We create a test plan because it includes test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and risk
assessment.
Reason number 7: We create test plan because one of the outputs for creating a test strategy is an approved and signed off test plan document.
Reason number 8: We create a test plan because the software testing methodology is a three-step process, and one of the steps is the creation of a test plan.
Reason number 9: We create a test plan because we want an opportunity to review the test plan with the project team.
Reason number 10: We create a test plan document because test plans should be documented, so that they are repeatable.

What do test plan templates look like?


The test plan document template helps to generate test plan documents that describe the objectives, scope, approach and focus of a software testing effort. Test
document templates are often in the form of documents that are divided into sections and subsections. One example of a template is a 4-section document where
section 1 is the description of the "Test Objective", section 2 is the description of "Scope of Testing", section 3 is the description of the "Test Approach", and
section 4 is the "Focus of the Testing Effort".
All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where
information is located, making it easier for a user to find what they want. With standards and templates, information will not be accidentally omitted from a
document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.
A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the
test group understand the why and how of product validation.
What is a test schedule?
The test schedule is a schedule that identifies all tasks required for a successful testing effort, a schedule of all test activities and resource requirements.

How do you create a test strategy?


The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team
analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test
environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
* A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
* A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
* Testing methodology. This is based on known standards.
* Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
* Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
* An approved and signed off test strategy document, test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

What is the purpose of test strategy?


Reason number 1: The number one reason of writing a test strategy document is to "have" a signed, sealed, and delivered, FDA (or FAA) approved document,
where the document includes a written testing methodology, test plan, and test cases.
Reason number 2: Having a test strategy does satisfy one important step in the software testing process.
Reason number 3: The test strategy document tells us how the software product will be tested.
Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan with the project team.
Reason number 5: The test strategy document describes the roles, responsibilities, and the resources required for the test and schedule constraints.
Reason number 6: When we create a test strategy document, we have to put into writing any testing issues requiring resolution (and usually this means additional
negotiation at the project management level).
Reason number 7: The test strategy is decided first, before lower level decisions are made on the test plan, test design, and other testing issues.

What does a test strategy document contain?


The test strategy document contains test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment. The test strategy
document is a formal description of how a software product will be tested. What is the test strategy document developed for? It is developed for all levels of testing,
as required. How is it written, and who writes it? It is the test team that analyzes the requirements, writes the test strategy, and reviews the plan with the project
team.

How do you execute tests?


Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test
execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the
execution phase. Checkpoint meetings are held daily, if required, to address and discuss testing issues, status and activities.
* The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results
have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers, software engineers
and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
* Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem, found during
system testing, is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.
* Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a
new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager,
Software QA Manager and/or Test Team Lead.
* After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software
components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the
Project Manager's formal acceptance.
* The test team reviews test document problems identified during testing, and updates documents where appropriate.
Inputs for this process:
* Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
* Test tools, including automated test tools, if applicable.
* Developed scripts.
* Changes to the design, i.e. Change Request Documents.
* Test data.
* Availability of the test team and project team.
* General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
* Software that has been migrated to the test environment, i.e. unit tested code, via the Configuration/Build Manager.
* Test Readiness Document.
* Document Updates.
Outputs for this process:
* Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed-off with revised testing deliverables.
* Changes to the code, also known as test fixes.
* Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.
* Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
* Formal record of test incidents, usually part of problem tracking.
* Base-lined package, also known as tested source and object code, ready for migration to the next level
Traceability matrix
(1)A traceability matrix maps requirements to test cases, to verify whether the test cases cover all the stated requirements. The purpose of
the traceability matrix is to identify all business requirements and to trace each requirement through to the project's completion.
(2)A matrix that records the relationship between two or more products; e.g., a matrix that records the relationship between the requirements and the design of a
given software component. See: traceability, traceability analysis.

What is a requirements test matrix?


The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle.
The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column
headers of the same table.
The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality.
The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration
effort.
The requirements test matrix is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the
requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.

Can you give me a requirements test matrix template?


For a requirements test matrix template, you want to visualize a simple, basic table that you create for cross-referencing purposes.
Step 1: Find out how many requirements you have.
Step 2: Find out how many test cases you have.
Step 3: Based on these numbers, create a basic table. If you have a list of 90 requirements and 360 test cases, you want to create a table of 91 rows and 361
columns.
Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and paste them into rows 2 through 91 of the table.
Step 5: Now switch your attention to the first row of the table. One by one, copy all your 360 test case numbers, and paste them into columns 2 through 361 of
the table.
Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they satisfy. If, for the sake of this example, test case
number 64 satisfies requirement number 12, then put a large "X" into cell 13-65 (row 13, column 65, because of the header row and column) of your table. And
there you have it: you have just created a requirements test matrix template that you can use for cross-referencing purposes.
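
The steps above can be sketched in a few lines of Python; the requirement and test case IDs below are hypothetical, used only to illustrate the cross-referencing idea.

```python
# Minimal sketch of a requirements test matrix. The IDs and the mapping
# of test cases to requirements are illustrative assumptions.
requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_cases = {
    "TC-01": ["REQ-001"],             # which requirements each test satisfies
    "TC-02": ["REQ-001", "REQ-003"],
    "TC-03": ["REQ-002"],
}

# Build the matrix: rows are requirements, columns are test cases.
matrix = {
    req: {tc: ("X" if req in covered else "")
          for tc, covered in test_cases.items()}
    for req in requirements
}

# Any requirement with no "X" in its row is not covered by any test.
uncovered = [req for req, row in matrix.items() if not any(row.values())]
print(uncovered)  # -> [] when every requirement is covered
```

The same structure scales to the 91-row, 361-column table described above; a spreadsheet or CSV export serves equally well.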

What metrics are used for bug tracking?


Metrics that can be used for bug tracking include the following: the total number of bugs, the total number of bugs that have been fixed, the number of new bugs per
week, and the number of fixes per week. Metrics for bug tracking can be used to determine when to stop testing, for example, when the bug rate falls below a certain
level. Defect tracking software can be learned and used to collect these metrics.
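
As a sketch, these metrics can be computed from a list of bug records; the field names and the stop-testing threshold below are assumptions, not the schema of any real tracking tool.

```python
# Illustrative bug-tracking metrics over hypothetical bug records.
bugs = [
    {"id": 1, "week_found": 1, "fixed": True},
    {"id": 2, "week_found": 1, "fixed": True},
    {"id": 3, "week_found": 2, "fixed": False},
    {"id": 4, "week_found": 3, "fixed": False},
]

total = len(bugs)
total_fixed = sum(1 for b in bugs if b["fixed"])

# Number of new bugs per week.
new_per_week = {}
for b in bugs:
    new_per_week[b["week_found"]] = new_per_week.get(b["week_found"], 0) + 1

# A simple stop-testing rule: halt when the weekly bug rate falls
# below a chosen threshold (the threshold value is an assumption).
THRESHOLD = 2
latest_week = max(new_per_week)
stop_testing = new_per_week[latest_week] < THRESHOLD
print(total, total_fixed, stop_testing)  # 4 2 True
```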

What metrics are used for test report generation?


Metrics that can be used for test report generation include...
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module design complexity metric (iv(G)), essential complexity metric (ev(G)),
pathological complexity metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object integration complexity metric (OS1), global data
complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric (TDR),
maintenance severity metric (maint_severity), data reference severity metric (DR_severity), data complexity severity metric (DV_severity), global data severity
metric (gdv_severity).
McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access to public data (PUBDATA), polymorphism percent of unoverloaded
calls (PCTCALL), number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV), and hierarchy quality (QUAL).
Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM), number of children (NOC), response for a class (RFC), weighted
methods per class (WMC), Halstead software metrics program length, program volume, program level and program difficulty, intelligent content, programming
effort, error estimate, and programming time.
Line count software metrics: lines of code, lines of comment, lines of mixed code and comments, and lines left blank.
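
The line count metrics can be illustrated with a small counter; the classification rules below are simplified assumptions (Python-style `#` comments only) rather than any standard tool's behavior.

```python
# Illustrative line-count metrics: code, comment, mixed, and blank lines.
def line_counts(source: str):
    code = comment = mixed = blank = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):
            comment += 1
        elif "#" in stripped:
            mixed += 1          # code followed by a trailing comment
        else:
            code += 1
    return {"code": code, "comment": comment, "mixed": mixed, "blank": blank}

sample = "# header\n\nx = 1\ny = 2  # init\n"
print(line_counts(sample))  # {'code': 1, 'comment': 1, 'mixed': 1, 'blank': 1}
```
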
Validation
Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its
predetermined specifications and quality attributes. Validation typically involves actual testing and takes place after verifications are completed.

Validation (Product Oriented)


Validation is concerned with whether the right functions of the program have been properly implemented, and that this function will properly produce the correct
output given some input value.

What is validation?
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place
after verifications are completed.

What is a walk-through?
A walk-through (in software QA) is an informal meeting for evaluation or informational purposes. A walk-through is also a process at an abstract level. It's the
process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way).
The purpose of code walk-throughs (in software development) is to ensure the code fits the purpose. Walk-throughs also offer opportunities to assess an
individual's or team's competency.
A walk-through is also a static analysis technique in which a programmer leads participants through a segment of documentation or code, and the participants ask
questions, and make comments about possible errors, violations of development standards, and other issues.

What is verification?
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans,
code, requirements and specifications; this can be done with checklists, issues lists, walk-throughs and inspection meetings.
Verification (Process Oriented)
Verification involves checking to see whether the program conforms to its specification, i.e., whether the right tools and methods have been employed. Thus, it
focuses on process correctness.

Validation and verification testing


Used as an entity to define a procedure of review, analysis, and testing throughout the software life cycle to discover errors, determine functionality, and ensure the
production of quality software.

What is V&V?
"V&V" is an acronym that stands for verification and validation.
"Verification: are we building the product right?"
"Validation: are we building the right product?"

Verification and validation (V&V) is a process that helps to determine if the software requirements are complete, correct; and if the software of each development
phase fulfills the requirements and conditions imposed by the previous phase; and if the final software complies with the applicable software requirements.

What is the difference between verification and validation?


Verification takes place before validation, and not vice versa.
Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.
The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the
actual testing of an actual product.
The output of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The output of validation, on the other hand, is a
nearly perfect, actual product.

software verification
In general, the demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.
Life Cycle
The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a
requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a
retirement phase.

software life cycle


Software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates,
retesting, phase-out, and other aspects.

What is SDLC?
SDLC is an acronym. It stands for "software development life cycle".

What models are used in the software development life cycle?


In software development life cycle the following models are used: waterfall model, incremental development model, rapid prototyping model, and spiral model.

Audit
(1)An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other
criteria.
(2)To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of data security and data
integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend any necessary changes.

Boundary value
(1)A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component.
(2)A value which lies at, or just inside or just outside a specified range of valid input and output values.

Boundary value analysis


A selection technique in which test data are chosen to lie along "boundaries" of the input domain [or output range] classes, data structures, procedure parameters,
etc. Choices often include maximum, minimum, and trivial values or parameters. This technique is often called stress testing.
or
A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside
boundaries, typical values, and error values.
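
As a sketch, the boundary values for a numeric input can be generated mechanically; the inclusive integer range and the example age limits below are assumptions for illustration.

```python
# A minimal boundary value generator for an inclusive integer range [lo, hi]:
# minimum, maximum, just inside, and just outside the valid range.
def boundary_values(lo: int, hi: int):
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# For a hypothetical field accepting ages 18..65:
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```
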

Branch coverage
A test coverage criteria which requires that for each decision point each possible branch be executed at least once. Syn: decision coverage. Contrast with
condition coverage, multiple condition coverage, path coverage, statement coverage.

Equivalence Partitioning
Input data of a program is divided into different categories so that test cases can be developed for each category of input data. The goal of equivalence partitioning
is to come out with test cases so that errors are uncovered and test cases can be carried out more efficiently. The different categories of input data are called
Equivalence Classes.
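
A minimal sketch of equivalence partitioning, assuming a hypothetical age field with a valid range of 18..65; one representative value per class is enough to exercise each behavior.

```python
# Equivalence classes for an assumed age field (class boundaries are
# illustrative assumptions, not from any specification).
partitions = {
    "invalid_low":  range(-100, 18),   # below the valid range
    "valid":        range(18, 66),     # accepted ages 18..65
    "invalid_high": range(66, 200),    # above the valid range
}

def classify(age: int) -> str:
    for name, rng in partitions.items():
        if age in rng:
            return name
    return "out_of_model"

# Pick one representative test value from the middle of each class.
representatives = {name: rng[len(rng) // 2] for name, rng in partitions.items()}
print({name: classify(v) for name, v in representatives.items()})
```
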
Manual Testing
That part of software testing that requires operator input, analysis, or evaluation.
or
A manual test is a test for which there is no automation. Instead, test steps are outlined in a document for the tester to complete. The tester can then report test
results and submit bugs as appropriate.
Mean
A value derived by adding several qualities and dividing the sum by the number of these quantities.

Measurement
The act or process of measuring. A figure, extent, or amount obtained by measuring.

Cause effect graph


A Boolean graph linking causes and effects. The graph is actually a digital-logic circuit (a combinatorial logic network) using a simpler notation than standard
electronics notation.

Cause effect graphing


(1)Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause
which effect. A minimal set of inputs is chosen which will cover the entire effect set.
(2)A systematic method of generating test cases representing combinations of conditions.

What is the difference between efficient and effective?


"Efficient" means having a high ratio of output to input; which means working or producing with a minimum of waste. For example, "An efficient engine saves gas."
Or, "An efficient test engineer saves time".
"Effective", on the other hand, means producing or capable of producing an intended result, or having a striking effect. For example, "For rapid long-distance
transportation, the jet engine is more effective than a witch's broomstick". Or, "For developing software test procedures, engineers specializing in software testing
are more effective than engineers who are generalists".
Bug
(1)A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.
(2)A bug is a glitch in computer software or hardware (where something doesn't do what it is supposed to do). Since computers and computer software are very
complicated to design, human beings will make mistakes in the design. Unfortunately, in the rush to market, many of these mistakes are not found until after a
product has shipped. This is why fixes (also called patches) are often posted on web sites. When considering the quality of a product, one must consider not only
the number of bugs, but also the value of the features of a program, since a feature-rich program is likely to have more bugs than a "plain-vanilla" program.
(3)A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

What is a bug life cycle?


Bug life cycles are similar to software development life cycles. At any time during the software development life cycle errors can be made during the gathering of
requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning,
integration, testing, maintenance, updates, re-testing and phase-out.
Bug life cycle begins when a programmer, software developer, or architect makes a mistake, creates an unintentional software defect, i.e. bug, and ends when the
bug is fixed, and the bug is no longer in existence.
What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is
resolved, fixes should be re-tested.
Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn't create
other problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking, management software tools are
available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an
idea of its severity, reproduce it and fix it.

What stage of bug fixing is the most cost effective?


Bug prevention techniques (i.e. inspections, peer design reviews, and walk-throughs) are more cost effective than bug detection.

What is the difference between a software bug and software defect?


"Software bug" is nonspecific; it means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g.
"software defect", or "software failure", are more specific.
The word "bug" has been a part of engineering jargon for many decades; well over a century ago, even Thomas Edison, the great inventor, wrote
about a "bug". Today there are many who believe the word "bug" is a reference to insects that caused malfunctions in early electromechanical computers.

What is the difference between a software bug and software defect?


In software testing, the difference between "bug" and "defect" is small, and also depends on the end client. For some clients, bug and defect are synonymous,
while others believe bugs are subsets of defects.
Difference number one: In bug reports, the defects are easier to describe.
Difference number two: In my bug reports, it is easier to write descriptions as to how to replicate defects. In other words, defects tend to require only brief
explanations.
Commonality number one: We, software test engineers, discover both bugs and defects, before bugs and defects damage the reputation of our company.
Commonality number two: We, software QA engineers, use the software much like real users would, to find both bugs and defects, to find ways to replicate both
bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In our reports, we include both bugs and defects that are
the results of software testing.

What's the difference between priority and severity?


The word "priority" is associated with scheduling, and the word "severity" is associated with standards. "Priority" means something is afforded or deserves prior
attention; a precedence established by urgency or order of or importance.
Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked
by or requires strict adherence to rigorous standards or high principles. For example, a severe code of behavior.
The words priority and severity do come up in bug tracking. A variety of commercial, problem-tracking / management software tools are available. These tools, with
the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it
and fix it. The fixes are based on project priorities and severity of bugs. The severity of a problem is defined in accordance to the end client's risk assessment, and
recorded in their selected tracking tool. Buggy software can severely affect schedules, which, in turn, can lead to a reassessment and renegotiation of priorities.
Top-Down Strategy
Top down integration is basically an approach where modules are developed and tested starting at the top level of the programming hierarchy and continuing with
the lower levels.
It is an incremental approach because we proceed one level at a time. It can be done in either a "depth-first" or a "breadth-first" manner.
- Depth-first means we proceed from the top level all the way down to the lowest level.
- Breadth-first, on the other hand, means that we start at the top of the hierarchy and then go to the next level. We develop and test all modules at this level before
continuing with another level.
Either way, this testing procedure allows us to establish a complete skeleton of the system or product.
The benefits of Top-down integration are that, having the skeleton, we can test major functions early in the development process.
At the same time we can also test any interfaces that we have and thus discover any errors in that area very early on. But the major benefit of this procedure is
that we have a partially working model to demonstrate to the clients and the top management. This of course builds everybody's confidence not only in the
development team but also in the model itself. We have something that proves our design was correct and we took the correct approach to implement it.
However, there are some drawbacks to this procedure as well:
Using stubs does not permit all the necessary upward data flow; there is simply not enough data in the stubs to feed back to the calling module.
As a result, the top-level modules cannot really be tested properly, and every time the stubs are replaced with the actual modules, the calling modules should be
re-tested for integrity again.
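
A stub in top-down integration can be sketched as follows; the module names and the hard-coded rate are illustrative assumptions.

```python
# Sketch of a stub used in top-down integration: the top-level module is
# tested against a stand-in for a lower-level module that is not built yet.
def tax_service_stub(amount: float) -> float:
    """Stub for the real tax module: returns a canned value. This is why
    upward data flow is limited and callers must be re-tested later."""
    return 0.25 * amount  # hard-coded rate, not real tax logic

def compute_invoice_total(amount: float, tax_service=tax_service_stub) -> float:
    """Top-level module under test; the stub is swapped for the real
    module once it exists."""
    return amount + tax_service(amount)

print(compute_invoice_total(100.0))  # 125.0
```
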

Bottom-Up Strategy
Bottom-up approach, as the name suggests, is the opposite of the Top-down method.
This process starts with building and testing the low level modules first, working its way up the hierarchy.
Because the modules at the low levels are very specific, we may need to combine several of them into what is sometimes called a cluster or build in order to test
them properly.
Then to test these builds, a test driver has to be written and put in place.
The advantage of Bottom-up integration is that there is no need for program stubs as we start developing and testing with the actual modules.
Starting at the bottom of the hierarchy also means that the critical modules are usually built first, and therefore any errors in these modules are discovered early in
the process.
As with Top-down integration, there are some drawbacks to this procedure.
In order to test the modules we have to build test drivers, which are more complex than stubs; in addition, the drivers themselves have to be tested, so more
effort is required.
A major disadvantage to Bottom-up integration is that no working model can be presented or tested until many modules have been built.
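
A test driver for bottom-up integration can be sketched as follows; the low-level module and its test cases are illustrative assumptions.

```python
# Sketch of a test driver for bottom-up integration: a low-level module
# is exercised by a driver before any higher-level caller exists.
def discount(price: float, percent: float) -> float:
    """Low-level module under test."""
    return price * (1 - percent / 100)

def driver():
    """Test driver: supplies inputs and checks outputs in place of the
    not-yet-written higher-level modules."""
    cases = [((200.0, 25.0), 150.0), ((80.0, 0.0), 80.0)]
    for (price, pct), expected in cases:
        assert abs(discount(price, pct) - expected) < 1e-9
    return "all driver cases passed"

print(driver())
```
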

Big-Bang Strategy
Big-Bang approach is very simple in its philosophy where basically all the modules or builds are constructed and tested independently of each other and when
they are finished, they are all put together at the same time.
The main advantage of this approach is that it is very quick as no drivers or stubs are needed, thus cutting down on the development time.
However, as with anything that is quickly slapped together, this process usually yields more errors than the other two. Since these errors have to be fixed, and take
more time to fix than errors at the module level, this method is usually considered the least effective.
Because of the amount of coordination that is required, it is also very demanding on resources. Another drawback is that there is really nothing to demonstrate
until all the modules have been built and integrated.
Inspection
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the
inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.
Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection
meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost effective methods of ensuring
quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about
quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more
cost-effective than bug detection.
or
1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults,
violations of development standards, and other problems. 2) A quality improvement process for written material that consists of two dominant components: product
(document) improvement and process improvement (document production and inspection). Instrument: To install or insert devices or instructions into hardware or
software to monitor the operation of a system or component.

What is an inspection?
An inspection is a formal meeting, more formalized than a walk-through and typically consists of 3-10 people including a moderator, reader (the author of whatever
is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a
test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a
written report. Attendees should prepare for this type of meeting by reading through the document, before the meeting starts; most problems are found during this
preparation. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective
than bug detection.

What is good code?


Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to,
but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that
excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Code inspection
A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing
the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items.

Code review
A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Contrast
with code audit, code inspection, code walkthrough.

Code walkthrough
A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code
review
Coverage analysis
Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is
useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing
relevant information have this feature.

Crash
The sudden and complete failure of a computer system or component.

Criticality
The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system. Syn: severity.

Cyclomatic complexity
(1)The number of independent paths through a program.
(2)The cyclomatic complexity of a program is equivalent to the number of decision statements plus 1.
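
Definition (2) can be illustrated with a crude keyword count over source text; a real tool parses the program's control-flow graph, so the keyword list and counting rule below are simplifying assumptions.

```python
# Illustrative cyclomatic complexity as "decision statements + 1",
# counting lines that begin with a decision keyword.
DECISION_KEYWORDS = ("if ", "elif ", "for ", "while ", "case ")

def cyclomatic_complexity(source: str) -> int:
    decisions = sum(line.strip().startswith(k)
                    for line in source.splitlines()
                    for k in DECISION_KEYWORDS)
    return decisions + 1

snippet = """\
if x > 0:
    y = 1
elif x < 0:
    y = -1
for i in items:
    total += i
"""
print(cyclomatic_complexity(snippet))  # 4 (three decisions + 1)
```
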

Error
A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

Error guessing
Test data selection technique. The selection criterion is to pick values that seem likely to cause errors.

Error seeding
(IEEE) The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection
and removal, and estimating the number of faults remaining in the program. Contrast with mutation analysis.

Exception
An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception,
protection exception, and underflow exception.
Failure
The inability of a system or component to perform its required functions within specified performance requirements.

What is software failure?


Software failure occurs when the software does not do what the user expects to see.

What is the difference between software fault and software failure?


Software failure occurs when the software does not do what the user expects to see. Software fault, on the other hand, is a hidden programming error.
A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This
can occur during normal usage, when the software is ported to a different hardware platform, when the software is ported to a different compiler, or when
the software gets extended.

Fault
An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner.

What is a software fault?


Software faults are hidden programming errors. Software faults are errors in the correctness of the semantics of computer programs.

Review
A process or meeting during which a work product or set of work products, is presented to project personnel, managers, users, customers, or other interested
parties for comment or approval. Types include code review, design review, formal qualification review, requirements review, test readiness review. Contrast with
audit, inspection.

Risk
A measure of the probability and severity of undesired effects. Often taken as the simple product of probability and consequence.

Risk Assessment
A comprehensive evaluation of the risk and its associated impact.
Software Review
An evaluation of software elements to ascertain discrepancies from planned results and to recommend improvement. This evaluation follows a formal process.

Static analysis
(1) Analysis of a program that is performed without executing the program.
(2)The process of evaluating a system or component based on its form, structure, content, documentation. Contrast with dynamic analysis.

Test
An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some
aspect of the system or component.

Testability
(1) The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have
been met.
(2) The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria
have been met.

Before creating test cases to "break the system", a few principles have to be observed:
Testing should be based on user requirements. This is in order to uncover any defects that might cause the program or system to fail to meet the client's
requirements.
Testing time and resources are limited. Avoid redundant tests.
It is impossible to test everything. Exhaustive tests of all possible scenarios are impossible, simply because of the many different variables affecting the system
and the number of paths a program flow might take.
Use effective resources to test. This means using the most suitable tools, procedures and individuals to conduct the tests. The test team should use tools that
they are confident and familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the
developers.
Test planning should be done early. This is because test planning can begin independently of coding, as soon as the client requirements are set.
Testing should begin at the module level. The focus of testing should be concentrated on the smallest programming units first and then expanded to other parts of
the system.
We look at software testing in the traditional (procedural) sense and then describe some testing strategies and methods used in Object Oriented environment. We
also introduce some issues with software testing in both environments.
Test case
Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test
case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through
the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
or
The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of
information about observable states, conditions, events, and data, including inputs and expected outputs.

What is a test case?


A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A
test case should contain particulars such as a...
* Test case identifier;
* Test case name;
* Objective;
* Test conditions/setup;
* Input data requirements/steps, and
* Expected results.
Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think
through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

Test Case Design


Test cases should be designed in such a way as to uncover quickly and easily as many errors as possible. They should "exercise" the program by using and
producing inputs and outputs that are both correct and incorrect. Variables should be tested using all possible values (for small ranges) or typical and out-of-bound
values (for larger ranges). They should also be tested using valid and invalid types and conditions. Arithmetical and logical comparisons should be examined as
well, again using both correct and incorrect parameters. The objective is to test all modules and then the whole system as completely as possible using a
reasonably wide range of conditions.

What do test case templates look like?


Software test case templates are blank documents that describe inputs, actions, or events, and their expected results, in order to determine if a feature of an
application is working correctly. Test case templates contain all particulars of test cases. For example, one test case template is in the form of a 6-column table,
where column 1 is the "test case ID number", column 2 is the "test case name", column 3 is the "test objective", column 4 is the "test conditions/setup", column 5 is
the "input data requirements/steps", and column 6 is the "expected results".
All documents should be written to a certain standard and template. Why? Because standards and templates help to maintain document uniformity. They also
help readers learn where information is located, making it easier for users to find what they want. And with standards and templates,
information is not accidentally omitted from documents.
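The 6-column template described above could be represented in code along these lines. The field names and the `make_test_case` helper are hypothetical illustrations, not a mandated standard.

```python
# Illustrative sketch of the 6-column test case template: one row per
# test case, one field per column. Field names are assumptions.
TEST_CASE_FIELDS = (
    "id", "name", "objective", "conditions_setup", "steps", "expected_results",
)

def make_test_case(**fields):
    """Build one test-case row, requiring every template column."""
    missing = [f for f in TEST_CASE_FIELDS if f not in fields]
    if missing:
        raise ValueError(f"missing template columns: {missing}")
    return {f: fields[f] for f in TEST_CASE_FIELDS}

tc = make_test_case(
    id="TC-001",
    name="Login with valid credentials",
    objective="Verify a registered user can log in",
    conditions_setup="User 'alice' exists with a known password",
    steps="Open login page; enter credentials; submit",
    expected_results="User lands on the dashboard page",
)
```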

Test case generator


A software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and,
sometimes, determines expected results.
How do you write test cases?
When I write test cases, I concentrate on one requirement at a time. Then, based on that one requirement, I come up with several real life scenarios that are likely
to occur in the use of the application by an end user.
When I write test cases, I describe the inputs, action, or event, and their expected results, in order to determine if a feature of an application is working correctly. To
make the test case complete, I also add particulars e.g. test case identifiers, test case names, objectives, test conditions (or setups), input data requirements (or
steps), and expected results.
Additionally, if I have a choice, I like writing test cases as early as possible in the development life cycle. Why? Because, as a side benefit of writing test cases,
many times I am able to find problems in the requirements or design of an application. And, because the process of developing test cases makes me completely
think through the operation of the application.

What is a test scenario?


The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be
executed. Test scenarios are test cases that ensure that all business process flows are tested from end to end. Test scenarios are independent tests, or a series of
tests that follow each other, where each is dependent upon the output of the previous one. Test scenarios are prepared by reviewing functional requirements,
and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual
situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It
is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the
use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts
may cover multiple test scenarios.
Scenario-based Testing
This form of testing concentrates on what the user does. It basically involves capturing the user actions and then simulating them and similar actions during the
test. These tests tend to find interaction-type errors.

What is the difference between a test plan and a test scenario?


Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is
a document that describes both typical and atypical situations that may occur in the use of an application.
Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test
conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.
Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a
description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.
What is good design?
Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be
traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily
modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

Test design
Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests.

Test documentation
Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test
plan, test procedure, test report.

Test driver
A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results.
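A minimal test driver might look like the following sketch, where `add` is a stand-in for any unit under test and the report format is an assumption.

```python
# Minimal test-driver sketch: invoke a unit under test with prepared
# inputs, compare actual against expected, and report the results.

def add(a, b):          # the "module under test" (a stand-in)
    return a + b

def run_driver(unit, cases):
    """cases: list of ((args...), expected). Returns a result report."""
    report = []
    for args, expected in cases:
        actual = unit(*args)
        report.append({"args": args, "expected": expected,
                       "actual": actual, "passed": actual == expected})
    return report

# Drive the unit with three cases; the last one is deliberately wrong.
results = run_driver(add, [((1, 2), 3), ((-1, 1), 0), ((2, 2), 5)])
failures = [r for r in results if not r["passed"]]
```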

Test incident report


A document reporting on any event that occurs during testing that requires further investigation.

Test item
A software item which is the object of testing.

Test log
A chronological record of all relevant details about the execution of a test.

Test phase
The period of time in the software life cycle in which the components of a software product are evaluated and integrated, and the software product is evaluated to
determine whether or not requirements have been satisfied.

Test plan
Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing
tasks, responsibilities, required resources, and any risks requiring contingency planning.
or
A formal or informal plan to be followed to assure the controlled testing of the product under test.

What is a test plan?


A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the
test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will
be able to read it.

Test procedure
A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for each defined test.
Technical Review
A review that refers to content of the technical material being reviewed.

Test Development
The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures,
cases, documentation, etc.

Test Executive
Another term for test harness.

Test Harness
A software tool that enables the testing of software components that links test capabilities to perform specific tests, accept program inputs, simulate missing
components, compare actual outputs with expected outputs to determine correctness, and report discrepancies.

Test Objective
An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software
documentation.
Test Procedure
The formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the test with a minimum of
training.

Testing
Any activity aimed at evaluating an attribute or capability of a program or system to determine that it meets its required results. The process of exercising or
evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between
expected and actual results.

Top-down Testing
An integration testing technique that tests the high-level components first using stubs for lower-level called components that have not yet been integrated and that
simulate the required actions of those components.
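A stub-based top-down arrangement can be sketched as follows; `pricing_stub` and `checkout_total` are invented names for illustration only.

```python
# Top-down integration sketch: the high-level component is tested first,
# with a stub standing in for a lower-level component that is not yet
# integrated.

def pricing_stub(item_id):
    """Stub for the not-yet-integrated pricing service: returns a
    canned value so the caller's logic can be exercised."""
    return 10.0

def checkout_total(item_ids, price_lookup):
    """High-level component under test; price_lookup is injected so a
    stub can replace the real lower-level component."""
    return sum(price_lookup(i) for i in item_ids)

# Exercise the high-level logic before the real pricing service exists.
total = checkout_total(["a", "b", "c"], price_lookup=pricing_stub)
```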
Test report
A document describing the conduct and results of the testing carried out for a system or system component.

Test result analyzer


A software tool used to test output data reduction, formatting, and printing.

Testing
(1)The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of
the system or component.
(2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e. bugs, and to evaluate the features of the
software items.

Acceptance Testing
Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the
system. Contrast with testing, development; testing, operational.
or
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the
system.

Boundary value Testing


A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just
below, and just above, the defined limits of an output domain.

What is boundary value analysis?


Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum,
minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a systems works correctly for these extreme
or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

Boundary Value Analysis is a method of testing that complements equivalence partitioning. In this case, data input as well as data output are tested. The rationale
behind BVA is that the errors typically occur at the boundaries of the data. The boundaries refer to the upper limit and the lower limit of a range of values or more
commonly known as the "edges" of the boundary.
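The boundary selection described above can be sketched as a small generator. The exact set of candidate values produced here is one common convention, not the only valid one.

```python
def boundary_values(lo: int, hi: int):
    """Candidate test inputs for an inclusive integer range [lo, hi]:
    just outside, at, and just inside each boundary, plus one typical
    mid-range value (a common BVA selection)."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# Example: a field accepting ages 18..65.
candidates = boundary_values(18, 65)
# -> [17, 18, 19, 41, 64, 65, 66]
```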

Branch Testing
Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch [outcome] be executed at least once. Contrast with
testing, path; testing, statement
Alpha Testing
Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating
the target environment with the developer observing and recording errors and usage problems.
or
Testing of a software product or system conducted at the developer’s site by the end user.

What is alpha testing?


Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha
testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers, or software QA
engineers.
Alpha testing is final testing before the software is released to the general public. First, (and this is called the first phase of alpha testing), the software is tested by
in-house developers. They use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs quickly. Then, (and this is called second
stage of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.

Assertion Testing
A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is
determined as the program executes.
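A minimal illustration of inserting assertions about the relationships between program variables, checked as the program executes; the `average` example is an invented stand-in.

```python
# Assertion-testing sketch: assertions about variable relationships are
# embedded in the code and evaluated during execution.

def average(values):
    assert len(values) > 0, "precondition: non-empty input"
    total = sum(values)
    result = total / len(values)
    # Asserted relationship between variables: the mean must lie
    # within the range of the inputs.
    assert min(values) <= result <= max(values)
    return result

avg = average([2, 4, 9])  # all embedded assertions hold at run time
```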

Beta Testing
(1)Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the
developer.
(2) For medical device software such use may require an Investigational Device Exemption [IDE] or Institutional Review Board [IRB] approval.
or
Testing conducted at one or more end user sites by the end user of a delivered software product or system.
What is beta testing?
Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final
release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure
the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to
benefit the maximum number of future users.

What is the difference between alpha and beta testing?


Alpha testing is performed by in-house developers and in-house software QA personnel. Beta testing is performed by the public, a few select prospective
customers, or the general public. Beta testing is performed after the alpha testing is completed.
Compatibility Testing
The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working
program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

What is incremental testing?


Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.

Integration
The process of combining software components or hardware components, or both, into an overall system.

Integration Testing
An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire
system has been integrated.

OO Integration Testing
This strategy involves testing the classes as they are integrated into the system. The traditional approach would test each operation separately as they are
implemented into a class. In an OO system this approach is not viable because of the "direct and indirect interactions of the components that make up the class". Integration testing in OO can be performed in two basic ways:
- Thread-based - Takes all the classes needed to react to a given input. Each class is unit tested, and then the thread constructed from these classes is tested as a set.
- Uses-based - Tests classes in groups. Once the group is tested, the next group that uses the first group (dependent classes) is tested. Then the group that uses
the second group and so on. Use of stubs or drivers may be necessary. Cluster testing is similar to testing builds in the traditional model. Basically collaborating
classes are tested in clusters.

What is integration testing?


Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer
requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test
team. Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on
client input.

What is incremental integration testing?


Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an
application's functionality are independent enough to work separately, before all parts of the program are completed, or that test drivers are developed as needed.
Incremental testing may be performed by programmers, software engineers, or test engineers.

How do you perform integration testing?


To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box
testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are
developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable, or acceptable, based on client
input.
Exhaustive Testing
Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

Functional Testing
(1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and
execution conditions.
(2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. (3)
Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Interface Testing
Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with testing, unit; testing, system.

Mutation Testing
A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect
differences in the mutations.

Operational Testing
Testing conducted to evaluate a system or component in its operational environment. Contrast with testing, development; testing, acceptance;

Parallel Testing
Testing a new or an altered data processing system with the same source data that is used in another system. The other system is considered as the standard of
comparison.
Audit
An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function;
it serves as the “eyes and ears” of management.

What is parallel/audit testing?


Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the
operations correctly.
Path Testing
Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes.
One path from each class is then tested.

Performance Testing
Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.

What is performance testing?


Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes
and response times, as defined by requirements.

What are the parameters of performance testing?


The term "performance testing" is often used synonymously with stress testing, load testing, reliability testing, and volume testing. Performance testing is part of
system testing, but it's also a distinct level of testing. Performance testing verifies loads, volumes, and response times, as defined by requirements.

Qualification Testing
Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.

Statement Testing
Testing to satisfy the criterion that each statement in a program be executed at least once during program testing.

Storage Testing
This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.
Regression Testing
Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software
development and maintenance.

What is regression testing?


The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes
introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All
discrepancies are highlighted and accounted for, before testing proceeds to the next level.

What is the objective of regression testing?


The objective of regression testing is to test that the fixes have not created any other problems elsewhere. The objective is to ensure the software has remained
intact. A baseline set of data and scripts are maintained and executed to verify that changes introduced during the release have not "undone" any previous code.
Expected results from the baseline are compared to results of the software under test. All discrepancies have to be highlighted and accounted for, before the
testing can proceed to the next level.
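The baseline comparison described above can be sketched as a simple diff between the baseline's expected results and the current run; the result format and test names are illustrative assumptions.

```python
# Regression-testing sketch: compare expected results from the baseline
# against results of the software under test, and highlight every
# discrepancy.

def regression_diff(baseline: dict, current: dict):
    """Return {test_name: (expected, actual)} for every discrepancy."""
    return {name: (expected, current.get(name))
            for name, expected in baseline.items()
            if current.get(name) != expected}

baseline = {"login": "ok", "search": "ok", "report": "ok"}
current = {"login": "ok", "search": "error", "report": "ok"}
discrepancies = regression_diff(baseline, current)
# Only the 'search' result changed since the baseline run.
```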

Is regression testing performed manually?


The answer to this question depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually
performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.

What is load testing?


Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response
time will degrade or fail.

What is load testing?


Load testing simulates the expected usage of a software program, by simulating multiple users that access the program's services concurrently. Load testing is
most useful and most relevant for multi-user systems, client/server models, including web servers. For example, the load placed on the system is increased above
normal usage patterns, in order to test the system's response at peak loads.
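A minimal sketch of simulating multiple concurrent users, using threads against a local stand-in service; a real load test would target the deployed system and measure response times under increasing load.

```python
# Load-testing sketch: several concurrent "users" hit a service and
# their outcomes are collected. fake_service is a local stand-in.
import threading

def fake_service(request_id):
    return f"ok:{request_id}"

def simulate_load(n_users):
    results = [None] * n_users

    def user(i):
        results[i] = fake_service(i)   # each thread writes its own slot

    threads = [threading.Thread(target=user, args=(i,))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

responses = simulate_load(10)
ok_count = sum(1 for r in responses if r and r.startswith("ok"))
```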

Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

What is stress testing?


Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is
stress tested, testing aims to find out how many users can be on-line, at the same time, without crashing the server. Stress testing tests the stability of a given
system or entity. It tests something beyond its normal operational capacity, in order to observe any negative results. For example, a web server is stress tested,
using scripts, bots, and various denial of service tools.
What is the difference between stress testing and load testing?
The term stress testing is often used synonymously with performance testing, reliability testing, volume testing, and load testing. Load testing is a blanket
term that is used in many different ways across the professional software testing community. Load testing generally stops short of stress testing. During stress
testing, the load is so great that the expected results are errors, though there is a gray area in between stress testing and load testing.
Structural Testing
(1)Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing.
(2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function.

What is structural testing?


Structural testing is white box testing, not black box testing, since black boxes are considered opaque and do not permit visibility into the code.
Structural testing is also known as clear box testing or glass box testing.
Structural testing is a way to test software with knowledge of the internal workings of the code being tested.

System
A collection of people, machines, and methods organized to accomplish a set of specified functions.

System Simulation
Another name for prototyping.

System Testing
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in
both the development environment and the target environment.
or
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.

System Testing
The final stage of the testing process should be system testing. This type of test involves examination of the whole computer system: all the software components, all
the hardware components, and any interfaces.
The whole computer-based system is checked not only for validity but also to verify that its objectives are met.
It should include recovery testing, security testing, stress testing and performance testing.
Recovery testing uses test cases designed to examine how easily and completely the system can recover from a disaster (power shut down, blown circuit, disk
crash, interface failure, insufficient memory, etc.). It is desirable to have a system capable of recovering quickly and with minimal human intervention. It should also
have a log of activities happening before the crash (these should be part of daily operations) and a log of messages during the failure (if possible) and upon re-
start.
Security testing involves testing the system in order to make sure that unauthorized personnel or other systems cannot gain access to the system and information
or resources within it. Programs that check for access to the system via passwords are tested along with any organizational security procedures established.
Stress testing encompasses creating unusual loads on the system in an attempt to break it. The system is monitored for performance loss and susceptibility to crashing
during the load times. If it does crash as a result of high load, that provides for just one more recovery test.
Performance testing involves monitoring and recording the performance levels during regular, low and high stress loads. It tests the amount of resource usage under the conditions just described and serves as a basis for forecasting any additional resources needed (if any) in the future. It is important to note that performance objectives should have been developed during the planning stage, and performance testing is meant to assure that these objectives are being met. However, these tests may also be run in the initial stages of production to compare actual usage to the forecasted figures.

What is system testing?


System testing is black box testing, performed by the Test Team, and at the start of the system testing the complete system is configured in a controlled
environment.
The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed.
System testing simulates real life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life.
System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.
Unit Testing
(1) Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. (2) Testing
conducted to verify the implementation of the design for one software element; e.g., a unit or module; or a collection of software elements.

What is unit testing?


Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then that of the test engineers.
Unit testing is deemed complete when the expected test results are met or differences are explainable/acceptable.
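As an illustration, a minimal unit test in Python's `unittest` framework might look like the following. The `apply_discount` function is a hypothetical unit under test, invented here for the sketch; it is not part of any product this document describes.

```python
import unittest

# Hypothetical unit under test: a simple discount calculation.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A suite like this would typically be run with `python -m unittest`; unit tests of this kind run first, before integration and system testing.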

OO Unit Testing
In the OO paradigm it is no longer possible to test individual operations as units. Instead they are tested as part of the class, and the class or an instance of a class (object) then represents the smallest testable unit or module. Because of inheritance, testing individual operations separately (independently of the class) would not be very effective, as they interact with each other by modifying the state of the object they are applied to.
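The point can be sketched in Python. The `BankAccount` class below is hypothetical; the test exercises its operations together, through the shared object state, rather than in isolation.

```python
import unittest

# Hypothetical class under test: in the OO view the class (or an instance
# of it) is the smallest testable unit, because its operations interact
# through the object's state.
class BankAccount:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class BankAccountTest(unittest.TestCase):
    def test_operations_interact_through_state(self):
        # deposit and withdraw are not tested in isolation: each is
        # exercised against the state left behind by the other.
        account = BankAccount()
        account.deposit(100)
        account.withdraw(30)
        self.assertEqual(account.balance, 70)
```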

What is usability?
"Usability" means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret the outputs of a software product.

Usability Testing
Tests designed to evaluate the machine/user interface. Are the communication devices designed in a manner such that the information is displayed in an understandable fashion, enabling the operator to correctly interact with the system?

What is usability testing?


Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video
recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.
Volume Testing
Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.
black box testing
Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

What is closed box testing?


Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

What black box testing types can you tell me about?


Black box testing is functional testing, not based on any knowledge of internal software design or code.
Black box testing is based on requirements and functionality. Functional testing is also a black-box type of testing geared to functional requirements of an
application.
System testing is also a black box type of testing. Acceptance testing is also a black box type of testing. Closed box testing is also a black box type of testing. Integration testing is also a black box type of testing.

What is functional testing?


Functional testing is black-box type of testing geared to functional requirements of an application. Test engineers *should* perform functional testing.
Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.
Function testing is a testing process that is black-box in nature. It is aimed at examining the overall functionality of the product. It usually includes testing of all the interfaces and should therefore involve the clients in the process. Because every aspect of the software system is being tested, the specifications for this test should be very detailed, describing who will conduct the tests, where, when and how they will be conducted, and what exactly will be tested.
The portion of the testing that will involve the clients is usually conducted as an alpha test where the developers closely monitor how the clients use the system.
They take notes on what needs to be improved.

OO Function Testing and OO System Testing


Function testing of OO software is no different than validation testing of procedural software. Client involvement is usually part of this testing stage. In OO
environment use cases may be used. These are basically descriptions of how the system is to be used.

OO system testing is really identical to its counterpart in the procedural environment.


white box testing
White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and
conditions.
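A tiny Python sketch of the idea: the `classify` function below is hypothetical, and the tests are derived from its program structure so that every branch is executed at least once.

```python
# Hypothetical function with three branches; white box tests are chosen
# from the code structure so that every branch is exercised.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# One test per branch gives full branch coverage of classify().
assert classify(-5) == "negative"   # covers the n < 0 branch
assert classify(0) == "zero"        # covers the n == 0 branch
assert classify(7) == "positive"    # covers the else branch
```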

What is clear box testing?


Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the
application's program logic.

What is glass box testing?


Glass box testing is the same as white box testing. It's a testing approach that examines the application's program structure, and derives test cases from the
application's program logic.

What is open box testing?


Open box testing is the same as white box testing. It's a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

What types of white box testing can you tell me about?


Clear box testing, glass box testing, and open box testing.
Clear box testing is white box testing. Glass box testing is also white box testing. Open box testing is also white box testing.
White box testing is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

What is grey box testing?


Grey box testing is a software testing technique that uses a combination of black box testing and white box testing. Grey box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the grey box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.
What is ad hoc testing?
Ad hoc testing is a testing approach; it is the least formal testing approach.

What is gamma testing?


Gamma testing is testing of software that does have all the required features, but did not go through all the in-house quality checks. Cynics tend to refer to
software releases as "gamma testing".

What is bottom-up testing?


Bottom-up testing is a technique of integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because,
with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
or
An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-
level components for test.
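A minimal Python sketch of the driver idea, under the assumption of a hypothetical low-level `parse_record` component whose higher-level caller has not yet been written:

```python
# Bottom-up sketch: the low-level component (parse_record) exists and is
# tested first; a simple test driver stands in for the not-yet-developed
# higher-level report module that will eventually call it.
def parse_record(line):
    """Low-level component: split 'name,amount' into (name, float)."""
    name, amount = line.split(",")
    return name.strip(), float(amount)

def test_driver():
    """Driver that calls the low-level component directly for testing."""
    cases = [("alice, 10.5", ("alice", 10.5)),
             ("bob,0", ("bob", 0.0))]
    for line, expected in cases:
        assert parse_record(line) == expected
    return "all driver cases passed"
```

Once the higher-level component is developed, it replaces the driver and integration continues upward.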
What is end-to-end testing?
Similar to system testing, the *macro* end of the test scale is testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

What is sanity testing?


Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset
of regression testing.
It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
What is installation testing?
Installation testing is testing of full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness.
This test includes the inventory of configuration items, performed by the application's System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

What is security/penetration testing?


Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage.
This type of testing usually requires sophisticated testing techniques.

What is recovery/error testing?


Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

What is disaster recovery testing?


"Disaster recovery testing" is testing how well a system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

What is compatibility testing?


Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

What is comparison testing?


Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

What is acceptance testing?


Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the
system being released to production.
The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team
also works with the client/customer/project manager to develop the acceptance criteria.
What is reliability testing?
Reliability testing is designing reliability test cases, using accelerated reliability techniques - for example step-stress, test / analyze / fix, and continuously
increasing stress testing techniques - AND testing units or systems to failure, in order to obtain raw failure time data for product life analysis.
The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer's reliability requirements.
In the system test phase, or after the software is fully developed, one reliability testing technique we use is a test / analyze / fix technique, where we couple
reliability testing with the removal of faults.
When we identify a failure, we send the software back to the developers, for repair. The developers build a new version of the software, and then we do another
test iteration.
Then we track failure intensity - for example failures per transaction, or failures per hour - in order to guide our test process, and to determine the feasibility of the
software release, and to determine whether the software meets the customer's reliability requirements.

Can you give me an example on reliability testing?


For example, our products are defibrillators. From direct contact with customers during the requirements gathering phase, our sales team learns that a large
hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.
In this example, the fact that our defibrillator can run for 250 hours without any failure is irrelevant to these customers as a demonstration of reliability.
In order to test for reliability we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks.
Therefore we describe the customer needs in a quantifiable manner, using the customer’s terminology. For example, our quantified reliability testing goal becomes
as follows: Our defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur from 1,000 shocks.
Then, for example, we use a test / analyze / fix technique, and couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we
send the software back to the developers, for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks (into dummy
resistor loads). We track failure intensity (i.e. failures per 1,000 shocks) in order to guide our reliability testing, and to determine the feasibility of the software
release, and to determine whether the software meets our customers' reliability requirements.
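The failure-intensity bookkeeping in this example can be sketched in Python; the goal, function names and numbers below are illustrative only, not part of any real product:

```python
# Quantified reliability goal from the example: 10 or fewer failures
# per 1,000 delivered shocks.
GOAL_FAILURES_PER_1000 = 10

def failure_intensity(failures, shocks):
    """Failures per 1,000 delivered shocks."""
    return failures * 1000 / shocks

def meets_reliability_goal(failures, shocks):
    """True if a test iteration meets the quantified goal."""
    return failure_intensity(failures, shocks) <= GOAL_FAILURES_PER_1000

# Two hypothetical test/analyze/fix iterations:
assert not meets_reliability_goal(25, 1000)  # early build fails the goal
assert meets_reliability_goal(8, 1000)       # a later build meets it
```

Tracking this intensity across iterations is what guides the decision on release feasibility.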
What is monkey testing?
Monkey testing is random testing performed by automated testing tools (after the latter are developed by humans). These automated testing tools are considered
"monkeys", if they work at random. We call them "monkeys" because it is widely believed that if we allow six monkeys to pound on six typewriters at random, for a
million years, they will recreate all the works of Isaac Asimov.
There are "smart monkeys" and "dumb monkeys". "Smart monkeys" are valuable for load and stress testing; they will find a significant number of bugs, but are
also very expensive to develop. "Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic testing, but they will find few bugs.
However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product. Monkey testing can be valuable, but it should not be your only form of testing.
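A "dumb monkey" can be sketched in a few lines of Python; `normalize` is a hypothetical function under test, and the monkey checks only that random inputs never raise an uncaught exception, not that the answers are correct.

```python
import random

# Hypothetical function under test: collapse whitespace and lowercase.
def normalize(text):
    return " ".join(text.split()).lower()

def dumb_monkey(iterations=1000, seed=42):
    """Feed random strings to the function; any uncaught exception is
    a crash found by the monkey. Returns the number of inputs tried."""
    rng = random.Random(seed)
    alphabet = "abc XYZ\t\n0123!@#"
    for _ in range(iterations):
        length = rng.randint(0, 50)
        text = "".join(rng.choice(alphabet) for _ in range(length))
        normalize(text)
    return iterations
```

A fixed seed makes any crash the monkey finds reproducible, which matters when the failure has to be reported and re-tested.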

What is stochastic testing?


Stochastic testing is the same as "monkey testing", but "stochastic testing" is a more technical-sounding name for the same testing process.
Stochastic testing is black box, random testing, performed by automated testing tools. Stochastic testing is a series of random tests run over time. The software under test typically passes each individual test; the goal is to see whether it can also pass a large series of such tests.

What is mutation testing?


Mutation testing is testing where our goal is to make mutant software fail, and thus demonstrate the adequacy of our test case. How do we perform mutation
testing?
Step one: We create a set of mutant software. In other words, each mutant software differs from the original software by one mutation, i.e. one single syntax
change made to one of its program statements, i.e. each mutant software contains one single fault.
Step two: We write and apply test cases to the original software and to the mutant software.
Step three: We evaluate the results, based on the following set of criteria: Our test case is inadequate, if both the original software and all mutant software
generate the same output. Our test case is adequate, if our test case detects faults in our software, or, if, at least, one mutant software generates a different output
than does the original software for our test case.

What is automated testing?


Automated testing is a formally specified and controlled method of testing.
or
That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.

What is smoke testing?


Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. It is sometimes carried out informally, i.e. without a formal test plan.
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are
not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, this can be the only kind of testing that
can be performed.
Smoke testing, by definition, is not exhaustive, but, over time, you can increase your coverage of smoke testing.
A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means, every file is compiled, linked, and
combined into an executable file every single day, and then the software is smoke tested.
Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to
be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable
enough to be tested more thoroughly. Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any
errors in development and future problems during integration. At first, smoke testing might be the testing of something that is easy to test. Then, as the system
grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
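A daily smoke suite can start as something very simple. In this hypothetical Python sketch a "build" is just a dictionary of health flags, and the suite reports which basic checks fail; a real suite would launch the application, ping the database, and so on.

```python
# Sketch of a small smoke suite run after each daily build: a few fast,
# deliberately non-exhaustive checks that the build is stable enough
# for deeper testing. The checked components are hypothetical.
def smoke_test(build):
    """Return the names of failed checks; an empty list means the build passes."""
    checks = {
        "imports": lambda b: b.get("modules_load", False),
        "starts_up": lambda b: b.get("app_starts", False),
        "db_reachable": lambda b: b.get("db_connects", False),
    }
    return [name for name, check in checks.items() if not check(build)]

good_build = {"modules_load": True, "app_starts": True, "db_connects": True}
bad_build = {"modules_load": True, "app_starts": False}
assert smoke_test(good_build) == []
assert smoke_test(bad_build) == ["starts_up", "db_reachable"]
```

As the system grows, new checks are added to the dictionary, which is how the suite "expands and grows" with the product.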
What is the difference between monkey testing and smoke testing?
Difference number 1: Monkey testing is random testing, and smoke testing is nonrandom testing. Smoke testing is nonrandom testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.
Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers.
Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke
testing.
Difference number 5: "Dumb monkeys" are inexpensive to develop, are able to do some basic testing, but, if we used them for smoke testing, they would find few
bugs during smoke testing.
Difference number 6: Monkey testing is not a thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume that the program is
stable enough to be tested more thoroughly.
Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from
something simple to something more thorough.
Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a
few seconds to a couple of hours.

Tell me about daily builds and smoke tests.


The idea is to build the product every day, and test it every day. The software development process at Microsoft and many other software companies requires daily
builds and smoke tests. According to their process, every day, every single file has to be compiled, linked, and combined into an executable program; and then the
program has to be "smoke tested".
Smoke testing is a relatively simple check to see whether the product "smokes" when it runs.
Please note that you should add revisions to the build only when it makes sense to do so. You should establish a build group and build daily; set your own standard for what constitutes "breaking the build", create a penalty for breaking the build, and check for broken builds every day.
In addition to the daily builds, you should smoke test the builds, and smoke test them daily. You should make the smoke test evolve as the system evolves. You should build and smoke test daily, even when the project is under pressure.
Think about the many benefits of this process! The process of daily builds and smoke tests minimizes the integration risk, reduces the risk of low quality, supports easier defect diagnosis, improves morale, enforces discipline, and keeps pressure-cooker projects on track. If you build and smoke test daily, success will come, even when you're working on large projects!

What is the difference between system testing and integration testing?


"System testing" is high-level testing, and "integration testing" is lower-level testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa.
For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, the complete
system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.
The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. The purpose of system
testing is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required
in real life.
What is the difference between performance testing and load testing?
Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used
synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress
testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

What is the difference between reliability testing and load testing?


The term, reliability testing, is often used synonymously with load testing. Load testing is a blanket term that is used in many different ways across the professional
software testing community. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results,
though there is gray area in between stress testing and load testing.

What is the difference between volume testing and load testing?


The term, volume testing, is often used synonymously with load testing. Load testing is a blanket term that is used in many different ways across the professional
software testing community.
What types of testing can you tell me about?
Each of the following represents a different type of testing: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

What testing roles are standard on most testing projects?


Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager
and Test Configuration Manager.

What is the role of documentation in QA?


Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.
How do you introduce a new software QA process?
It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-sized organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Why are processes and procedures important?


Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate a successful completion of a task. They also ensure a
process is repeatable.

When do you choose automated testing?


For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the
automated testing tools is usually not worthwhile. Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that
if there are continual changes to the product being tested, the recordings have to be changed so often, that it becomes a very time-consuming task to continuously
update the scripts. Another problem with such tools is that the interpretation of the results (screens, data, logs, etc.) can be a time-consuming task.

Do automated testing tools make testing easier?


Yes and no.
For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not
worthwhile.
A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices,
buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that
the testing tool can interpret.
If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the
recorded actions and compared to the logged results in order to check effects of the change.
One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very
time-consuming task to continuously update the scripts.
Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.
Why are there so many software bugs?
Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made
in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.
* There are unclear software requirements because there is miscommunication as to what the software should or shouldn't do.
* Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
* Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
* As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.
* Bug tracking can introduce errors, because keeping track of a large number of changes is itself a complex task.
* Time pressures can cause problems, because scheduling of software projects is not easy and it often requires a lot of guesswork and when deadlines loom and
the crunch comes, mistakes will be made.
* Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for
programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly
turning out code, or programmers and software engineers feel they cannot have job security if everyone can understand the code they write, or they believe if the
code was hard to write, it should be hard to read.
* Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

Give me five common problems that occur during software development.


Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.
1. Requirements are poorly written when requirements are unclear, incomplete, too general, or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.
What should be done after a bug is found?
When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested. Additionally,
determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn't create other
problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking/management
software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand
the bug, get an idea of its severity, reproduce it and fix it.
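The determinations described above can be sketched as a minimal bug record. This is an illustrative sketch only: the field names, statuses and workflow are assumptions, not taken from any particular commercial problem-tracking tool.

```python
from dataclasses import dataclass, field

# A hypothetical bug record capturing the information the answer above
# says a tracking system should hold: severity, reproduction steps, and
# the areas that need regression testing after a fix.
@dataclass
class BugReport:
    bug_id: int
    summary: str
    severity: str                                   # e.g. "critical", "major", "minor"
    steps_to_reproduce: list = field(default_factory=list)
    regression_areas: list = field(default_factory=list)
    status: str = "open"                            # open -> fixed -> closed

    def mark_fixed(self):
        self.status = "fixed"

    def mark_retested(self, passed: bool):
        # A fix is only closed after a passing re-test; a failing
        # re-test reopens the bug.
        self.status = "closed" if passed else "open"

bug = BugReport(1, "Crash on save", "critical",
                steps_to_reproduce=["Open file", "Edit", "Save"],
                regression_areas=["file I/O", "autosave"])
bug.mark_fixed()
bug.mark_retested(passed=True)
```

The point of the sketch is the lifecycle: the record carries enough detail for a developer to reproduce and fix the bug, and the status only reaches "closed" via a re-test.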
Give me five solutions to problems that occur during software development.
Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help
nail down requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be
able to complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend design against changes and additions, once development has
begun and be prepared to explain consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early
on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools and change management
tools. Ensure documentation is available and up-to-date. Use electronic documentation, not paper. Promote teamwork and cooperation.

What if the software is so buggy it can't be tested at all?


In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on
critical bugs.
Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing,
insufficient integration testing, poor design, improper build or release procedures, managers should be notified and provided with some documentation as evidence
of the problem.

What if there isn't enough time for thorough testing?


Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate to most software development projects.
Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include
answers to the following questions:
* Which functionality is most important to the project's intended purpose?
* Which functionality is most visible to the user?
* Which functionality has the largest safety impact?
* Which functionality has the largest financial impact on users?
* Which aspects of the application are most important to the customer?
* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex and thus most subject to errors?
* Which parts of the application were developed in rush or panic mode?
* Which aspects of similar/related previous projects caused problems?
* Which aspects of similar/related previous projects had large maintenance expenses?
* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?
* What kinds of problems would cause the worst publicity?
* What kinds of problems would cause the most customer service complaints?
* What kinds of tests could easily cover multiple functionalities?
* Which tests will have the best high-risk-coverage to time-required ratio?
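One way to act on a checklist like this is to score each area of the application against the questions and test the highest scorers first. The sketch below is a made-up illustration: the area names, question keys and weights are all assumptions, and real risk analysis relies on judgment, not just arithmetic.

```python
# Hypothetical risk scoring: each area is rated 0-5 on a few of the
# checklist questions above, and a weighted sum ranks the areas.
def risk_score(area):
    weights = {"importance": 3,   # importance to intended purpose
               "visibility": 2,   # visibility to the user
               "complexity": 2,   # code complexity, error-proneness
               "rushed": 1}       # developed in rush/panic mode
    return sum(weights[k] * area[k] for k in weights)

areas = [
    {"name": "login",         "importance": 5, "visibility": 5, "complexity": 2, "rushed": 0},
    {"name": "report export", "importance": 2, "visibility": 3, "complexity": 4, "rushed": 1},
    {"name": "billing",       "importance": 5, "visibility": 2, "complexity": 5, "rushed": 1},
]

# Focus testing on the highest-risk areas first.
ranked = sorted(areas, key=risk_score, reverse=True)
```

With these invented weights, "billing" (important, complex, rushed) outranks the more visible but simpler "login" screen, which is exactly the kind of non-obvious prioritization the checklist is meant to surface.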
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the
considerations listed under "What if there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc" testing, or write up a limited
test plan based on the risk analysis.

What is a test engineer?


We, test engineers, are engineers who specialize in testing. We create test cases, procedures and scripts, and generate data. We execute test
procedures and scripts, analyze standards of measurements, and evaluate the results of system/integration/regression testing.
We also...
* Speed up the work of your development staff;
* Reduce your organization's risk of legal liability;
* Give you the evidence that your software is correct and operates properly;
* Improve your problem tracking and reporting;
* Maximize the value of your software;
* Maximize the value of the devices that use it;
* Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and
before employees get bogged down;
* Help the work of your development staff, so the development team can devote its time to building up your product;
* Promote continual improvement;
* Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
* Save money by discovering defects 'early' in the design process, before failures occur in production, or in the field;
* Save the reputation of your company by discovering bugs and design flaws; before bugs and design flaws damage the reputation of your company.

What is the role of test engineers?


We, test engineers, speed up the work of your development staff, and reduce the risk of your company's legal liability. We give your company the evidence that the
software is correct and operates properly. We also improve your problem tracking and reporting. We maximize the value of your software, and the value of the
devices that use it. We also assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before
shareholders lose their cool, and before your employees get bogged down. We help the work of your software development staff, so your development team can
devote its time to building up your product. We also promote continual improvement. We provide documentation required by FDA, FAA, other regulatory agencies,
and your customers. We save your company money by discovering defects EARLY in the design process, before failures occur in production, or in the field. We
save the reputation of your company by discovering bugs and design flaws, before bugs and design flaws damage the reputation of your company.
What is a QA engineer?
We, QA engineers, are test engineers but we do more than just testing. Good QA engineers understand the entire software development process and how it fits
into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. We, QA
engineers, are successful if people listen to us, if people use our tests, if people think that we're useful, and if we're happy doing our work. I would love to see QA
departments staffed with experienced software developers who coach development teams to write better code. But I've never seen it. Instead of coaching, we, QA
engineers, tend to be process people.

What is the role of a QA engineer?


The QA engineer's role is as follows: We, QA engineers, use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug
reports to the developers, and provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
What are the responsibilities of a QA engineer?
Let's say, an engineer is hired for a small software company's QA role, and there is no QA team. Should he take responsibility to set up a QA
infrastructure/process, testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why?
Because we QA engineers cannot assure quality. And because QA departments cannot create quality.
What we CAN do is to detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell
the developers that they are responsible for the quality of their own work. The problem is, sometimes, as soon as the developers learn that there is a test
department, they will slack off on their testing. We need to offer to help with quality assessment, only.

What is a Test/QA Team Lead?


The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

What is the ratio of developers to testers?


The ratio of developers to testers is not fixed, but depends on what phase of the software development life cycle the project is in. When a product is first
conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the product is near the end
of the software development life cycle, just before alpha testing begins, this ratio tends to be 1:1, or even 1:2, in favor of testers.

Which of these roles are the best and most popular?


In testing, Tester roles tend to be the most popular. The less popular roles include the roles of System Administrator, Test/QA Team Lead, and Test/QA Managers.

What other roles are in testing?


Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA
Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test Configuration Managers.
Depending on the project, one person can, and often does, wear more than one hat. For instance, we Test Engineers often wear the hats of Technical Analyst, Test Build
Manager and Test Configuration Manager as well.

What is a Test Build Manager?


Test Build Managers deliver current software versions to the test environment, install the application's software and apply software patches, to both the application
and the operating system, set-up, maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.
What is a System Administrator?
Test Build Managers, System Administrators, Database Administrators deliver current software versions to the test environment, install the application's software
and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware.
Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

What is a Database Administrator?


Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's
software and apply software patches, to both the application and the operating system, set-up, maintain and back up test environment hardware. Depending on
the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

What is a Technical Analyst?


Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one
hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

What is a Test Configuration Manager?


Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For
instance, Test Engineers may also wear the hat of a Test Configuration Manager.

What makes a good test engineer?


Good test engineers have a "test to break" attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and an
attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers and an ability to communicate with both technical and
non-technical people. Previous software development experience is also helpful as it provides a deeper understanding of the software development process, gives
the test engineer an appreciation for the developers' point of view and reduces the learning curve in automated test tool programming.

What makes a good QA engineer?


The same qualities a good test engineer has are useful for a QA engineer. Additionally, good QA engineers understand the entire software development process
and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are
important.
What makes a good QA/Test Manager?
QA/Test Managers are familiar with the software development process; able to maintain enthusiasm of their team and promote a positive atmosphere; able to
promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers, have the people skills needed to promote
improvements in QA processes, have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being
adhered to; able to communicate with technical and non-technical people; as well as able to run meetings and keep them focused.

What about requirements?


Requirement specifications are important, and one of the most reliable methods of ensuring problems in a complex software project is to have poorly documented
requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be
clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, "user-friendly", which is too subjective.
A testable requirement would be something such as, "the product shall allow the user to enter their previously-assigned password to access the application". Care
should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external and could include end-
users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople and
anyone who could later derail the project if their expectations aren't met; all of them should be included as customers, if possible. In some organizations,
requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No
matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests.
Without such documentation there will be no clear-cut way to determine if a software application is performing correctly.

What can be done if requirements are changing continuously?


Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is
helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...
* Ensure the code is well commented and well documented; this makes changes easier for the developers.
* Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
* In the project's initial schedule, allow some extra time commensurate with probable changes.
* Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
* Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.
* Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the
customers decide if the changes are warranted; after all, that's their job.
* Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.
* Design some flexibility into automated test scripts;
* Focus initial automated testing on application aspects that are most likely to remain unchanged;
* Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
* Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test
plans;
* Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software
development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies
that were not taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any
significant added risks as a result of the unexpected functionality. If the functionality only affects areas, such as minor improvements in the user interface, it may
not be a significant risk.

How do you know when to stop testing?


This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can
never be done. Common factors in deciding when to stop are...
* Deadlines, e.g. release deadlines, testing deadlines;
* Test cases completed with certain percentage passed;
* Test budget has been depleted;
* Coverage of code, functionality, or requirements reaches a specified point;
* Bug rate falls below a certain level; or
* Beta or alpha testing period ends.
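The factors above can be combined into a simple stopping check. This is only a sketch of the decision logic; the thresholds (95% pass rate, 80% coverage, and the bug-rate floor) are invented for illustration and would be set per project.

```python
# A hypothetical stop-testing check over the common factors listed
# above: deadlines, budget, pass rate, coverage, and bug arrival rate.
def should_stop(status):
    return (status["deadline_reached"]
            or status["budget_spent"] >= status["budget_total"]
            or (status["pass_rate"] >= 0.95 and status["coverage"] >= 0.80)
            or status["bugs_per_week"] <= status["bug_rate_floor"])

status = {"deadline_reached": False,
          "budget_spent": 80_000, "budget_total": 100_000,
          "pass_rate": 0.97, "coverage": 0.85,      # pass-rate/coverage criteria met
          "bugs_per_week": 4, "bug_rate_floor": 2}
stop = should_stop(status)
```

In practice no single criterion decides the question; the point of the sketch is that the decision is an explicit combination of measurable factors rather than a gut feeling.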

How can software QA processes be implemented without stifling productivity?


Implement QA processes slowly over time. Use consensus to reach agreement on processes and adjust and experiment as an organization grows and matures.
Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease and there will be
improved focus and less wasted effort.
At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated
tracking and reporting, minimize time required in meetings and promote training as part of the QA process.
However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days
of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Why do we recommend that we test during the design phase?


Because testing during the design phase can prevent defects later on. We recommend verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional
circumstances, starting state of each module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory, enough I/O devices and a quick enough runtime for the final product.
What if organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than...
* Hire good people
* Ruthlessly prioritize quality issues and maintain focus on the customer;
* Everyone in the organization should be clear on what quality means to the customer.

How is testing affected by object-oriented designs?


A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little
effect on black-box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's
objects. If the application was well designed this can simplify test design.
Standards and templates - what is supposed to be in a document?
All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where
information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a
document.

What is configuration management?


Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change
requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM
tools and concepts, and can easily adapt to your software tool and process needs.

What is software configuration management?


Software Configuration Management (SCM) is the control and the recording of changes that are made to the software and documentation throughout the software
development life cycle (SDLC).
SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools,
compilers, libraries, patches, and changes made to them, and to keep track of who makes the changes.
CMM and CMMI
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process
'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI
ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place;
successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can
be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in
place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when
required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at
5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key
process area was in Software Quality Assurance.

ISO
ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that
are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design,
development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality Management Systems:
Requirements; (b)Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after
which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes
are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/

IEEE
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation'
(IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans'
(IEEE/ANSI Standard 730), and others.

ANSI
ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with
the IEEE and ASQ (American Society for Quality). Other software development/IT management process assessment methods besides CMMI and ISO 9000
include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

SEI
SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
WinRunner: Should I sign up for a course at a nearby educational institution?
When you're employed, the cheapest or free education is sometimes provided on the job, by your employer, while you are getting paid to do a job that requires the
use of WinRunner and many other software testing tools.
If you're employed but have little or no time, you could still attend classes at nearby educational institutions.
If you're not employed at the moment, then you've got more time than everyone else, so that's when you definitely want to sign up for courses at nearby
educational institutions. Classroom education, especially non-degree courses in local community colleges, tends to be cheap.

I don't have a lot of money. How can I become a good tester?


If you don't have a lot of money, but want to become a good tester, the cheapest or free education is sometimes provided on the job, by an employer, while you're
getting paid to do a job that requires the use of WinRunner and many other software testing tools.

Which of these tools should I learn?


Learn ALL you can! Learn all the tools that you are able to master! Ideally, this will include some of the most popular software tools, i.e. LabView, LoadRunner,
Rational Tools, WinRunner, SilkTest etc.

What are some of the software configuration management tools?


Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS; and there are many others.
Rational ClearCase, made by Rational Software, is a popular software tool for revision control of source code.
DOORS, or "Dynamic Object Oriented Requirements System", is a requirements version control software tool.
CVS, or "Concurrent Version System", is another popular, open source version control tool. It keeps track of changes in documents associated with software
projects. It enables several, often distant, developers to work together on the same source code.
PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX command that compares
contents of two files.
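The line-by-line comparison that diff performs can be demonstrated with Python's standard difflib module, which produces the same unified-diff format that version control tools such as CVS build on. The file names and contents here are made up for the example.

```python
# difflib performs the same kind of comparison as the UNIX diff
# command: it finds the lines that changed between two sequences.
import difflib

old = ["int main() {\n", "  return 0;\n", "}\n"]
new = ["int main() {\n", "  return 1;\n", "}\n"]

# unified_diff yields a header, then context lines (" "), removed
# lines ("-"), and added lines ("+").
delta = list(difflib.unified_diff(old, new, fromfile="a.c", tofile="b.c"))
for line in delta:
    print(line, end="")
```

Version control systems store histories largely as chains of such deltas, which is why the answer above describes SCCS as "based on diff".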
What is documentation change management?
Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code,
requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes.

What is up time?
"Up time" is the time period when a system is operational and in service. Up time is the sum of busy time and idle time. For example, if, out of 168 hours, a system
has been busy for 50 hours, idle for 110 hours, and down for 8 hours, then the busy time is 50 hours, idle time is 110 hours, and up time is (110 + 50 =) 160 hours.
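The arithmetic in the example above is simply:

```python
# Up time = busy time + idle time; down time is excluded.
busy, idle, down = 50, 110, 8       # hours, from the example above
up_time = busy + idle               # 160 hours
total = busy + idle + down          # 168 hours in the week
```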

What is upwardly compatible software?


"Upwardly compatible software" is software that is compatible with a later or more complex version of itself. For example, an upwardly compatible software is able
to handle files created by a later version of itself.

What is upward compression?


In software design, "upward compression" means a form of demodularization in which a subordinate module is copied into the body of a superior module.

What is user documentation?


"User documentation" is a document that describes the way a software product or system should be used to obtain the desired results.

What is a user manual?


A "user manual" is a document that presents information necessary to employ software or a system to obtain the desired results. Typically, what is described are
system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.
What is the difference between user documentation and user manual?
When a distinction is made between those who operate a computer system and those who use it for its intended purpose, separate user documentation and user
manuals are created. Operators get user documentation, and users get user manuals.

What is user friendly software?


A computer program is "user friendly" when it is designed with ease of use as one of the primary objectives of its design.

What is a user friendly document?


A document is user friendly when it is designed and written with ease of use as one of the primary objectives of its design.

What is a user guide?


The "user guide" is the same as the user manual. The user guide is a document that presents information necessary to employ a system or component to obtain
the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages,
and special instructions.

Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or
more computer programs.

Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

What is a user interface?


"User interface" is the interface between a human user and a computer system. It enables the passage of information between a human user and hardware or
software components of a computer system.

What is a utility?
"Utility" is a software tool designed to perform some frequently used support function. For example, one utility is a program to print files.
What is utilization?
"Utilization" is the ratio of time a system is busy (i.e. working for us), divided by the time it is available. For example, if a system was available for 160 hours and
busy for 40 hours, then utilization was (40/160 =) 25 per cent. Utilization is a useful measure in evaluating computer performance.
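The utilization calculation from the example above, in code:

```python
# Utilization = busy time / available time.
busy_hours = 40
available_hours = 160
utilization = busy_hours / available_hours   # 0.25, i.e. 25 per cent
```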

What is variable trace?


"Variable trace" is a (computer) record of the names and the values of variables accessed and/or changed during the execution of a computer program.
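A minimal variable trace can be sketched in Python with `sys.settrace`, which lets a program record the local variables as each line executes (the demo function and its variables are hypothetical):

```python
import sys

trace_log = []  # one snapshot of the local variables per executed line

def trace_locals(frame, event, arg):
    """Trace function: record the local variables at each 'line' event."""
    if event == "line":
        trace_log.append(dict(frame.f_locals))
    return trace_locals

def demo():
    x = 1
    y = x + 2
    return y

sys.settrace(trace_locals)
demo()
sys.settrace(None)
# trace_log now holds the names and values of the variables,
# recorded line by line as demo() executed
```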

What is value trace?


"Value trace" is same as variable trace. It is a (computer) record of the names and values of variables accessed and/or changed during the execution of a
computer program.

What is a variable?
"Variables" are data items in a program whose values can change. There are local and global variables. One example is a variable named "capacitor_voltage_10000", whose value can be any whole number between -10000 and +10000.

What is a variant?
"Variants" are versions of a program. Variants result from the application of software diversity.

What is a software version?


A software version is an initial release (or re-release) of software, associated with a complete compilation (or recompilation) of the software.
What is a document version?
A document version is an initial release (or complete re-release) of a document, as opposed to a revision resulting from issuing change pages to a previous
release.

What is VDD?
"VDD" is an acronym that stands for "version description document".

What is a version description document (VDD)?


Version description document (VDD) is a document that accompanies and identifies a given version of a software product. Typically the VDD includes the
description and identification of the software, identification of the changes incorporated into this version, and the installation and operating information unique to
this version of the software.

What is a vertical microinstruction?


A vertical microinstruction is a microinstruction that specifies one of a sequence of operations needed to carry out a machine language instruction. Vertical
microinstructions are short, 12 to 24 bit instructions, so called because they are normally listed vertically on a page. A sequence of these microinstructions is
required to carry out a single machine language instruction. In addition to vertical microinstructions, there are horizontal and diagonal microinstructions as well.

What is a virtual address?


In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. The use of virtual addresses allows those locations to be accessed as
though they were part of the main storage.

What is a virtual memory?


Virtual memory relates to virtual storage. In virtual storage, portions of a user's program and data are placed in auxiliary storage, and the operating system
automatically swaps them in and out of main storage as needed.

What is a waiver?
In software QA, a waiver is an authorization to accept software that has been submitted for inspection, found to depart from specified requirements, but is
nevertheless considered suitable for use "as is", or after rework by an approved method.
What is a waterfall model?
Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation phase, test phase,
installation phase, and checkout phase are performed in that order, possibly with overlap, but with little or no iteration.

How do you conduct peer reviews?


The peer review, sometimes called PDR, is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including the test lead,
the task lead (the author of whatever is being reviewed), and a facilitator (to make notes). The subject of the PDR is typically a code block, release, feature, or
document. The purpose of the PDR is to find problems and see what is missing, not to fix anything. The result of the meeting is documented in a written report.
Attendees should prepare for PDRs by reading through the documents before the meeting starts; most problems are found during this preparation.
Why is the PDR great? Because it is a cost-effective method of ensuring quality: bug prevention is more cost effective than bug detection.

How do you check the security of an application?


To check the security of an application, one can use security/penetration testing. Security/penetration testing is testing how well a system is protected against
unauthorized internal, or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

How do you test the password field?


To test the password field, we do boundary value testing.
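As a sketch, assuming a hypothetical policy that passwords must be 8 to 16 characters long, boundary value testing probes the values just below, at, and just above each boundary:

```python
def accept_password(pw, min_len=8, max_len=16):
    """Hypothetical validator: accept passwords of 8 to 16 characters."""
    return min_len <= len(pw) <= max_len

# Probe just below, at, and just above each boundary.
cases = {
    "a" * 7:  False,  # minimum - 1
    "a" * 8:  True,   # minimum
    "a" * 9:  True,   # minimum + 1
    "a" * 15: True,   # maximum - 1
    "a" * 16: True,   # maximum
    "a" * 17: False,  # maximum + 1
}
for pw, expected in cases.items():
    assert accept_password(pw) == expected
```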

When testing the password field, what is your focus?


When testing the password field, one needs to focus on encryption; one needs to verify that the passwords are encrypted.
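That focus can be sketched with a hypothetical storage routine: the test verifies that the stored value never contains the plaintext password. (A real system should use a salted, slow hash such as bcrypt; plain SHA-256 here only illustrates the check.)

```python
import hashlib

def store_password(pw):
    """Hypothetical storage routine: store only a digest, never the plaintext."""
    return hashlib.sha256(pw.encode()).hexdigest()

stored = store_password("s3cret!")

# The stored value must never reveal the plaintext password.
assert stored != "s3cret!"
assert "s3cret!" not in stored
```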

What is your view of software QA/testing?


Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive, attainable and testable, and if schedules are realistic, and if there is
good communication in the group.
Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is allowed for planning, design, testing, bug fixing, re-testing, changes,
and documentation.
Software QA/testing is easy, if testing is started early on, if fixes or changes are re-tested, and sufficient time is planned for both testing and bug fixing.
Software QA/testing is easy, if new features are avoided, and if one sticks to initial requirements as much as possible.
How can I be a good tester?
We, good testers, take the customers' point of view. We are also tactful and diplomatic. We have a "test to break" attitude, a strong desire for quality, an attention
to detail, and good communication skills, both oral and written. Previous software development experience is also helpful, as it provides a deeper understanding of
the software development process.

How do you compare two files?


Generally speaking, when we write a software program to compare files, we compare the two files byte by byte. For example, the UNIX utility "cmp" compares two
files byte by byte, while "diff", another UNIX utility, compares two text files line by line.

What do we use for comparison?


We can use "diff", a UNIX utility, to compare two text files line by line, or "cmp", another UNIX utility, to compare two files byte by byte.
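Both styles of comparison can be sketched with Python's standard library: `filecmp` compares byte for byte (like `cmp`), and `difflib` compares line by line (like `diff`). The file contents are hypothetical:

```python
import difflib, filecmp, os, tempfile

# Two small text files with hypothetical content.
a = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
b = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
a.write("alpha\nbeta\n"); a.close()
b.write("alpha\ngamma\n"); b.close()

# filecmp compares byte for byte, like the UNIX "cmp" utility.
identical = filecmp.cmp(a.name, b.name, shallow=False)

# difflib compares line by line, like the UNIX "diff" utility.
with open(a.name) as fa, open(b.name) as fb:
    diff = list(difflib.unified_diff(
        fa.read().splitlines(), fb.read().splitlines(), lineterm=""))

os.unlink(a.name); os.unlink(b.name)
print(identical)   # False: the files differ
print(diff[-2:])   # ['-beta', '+gamma']
```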

What is the reason we compare files?


We compare files because of configuration management, revision control, requirement version control, or document version control. Examples are Rational
ClearCase, DOORS, PVCS, and CVS. CVS, for example, enables several, often distant, developers to work together on the same source code.
When is a process repeatable?
A process is repeatable when we use detailed and well-written processes and procedures; this way we ensure that the correct steps are being executed. This also
facilitates a successful completion of the task, and ensures the process is repeatable.

How can I start my career in automated testing?


To start your career in automated testing:
1. Read all you can, and that includes reading product descriptions, pamphlets, manuals, books, information on the Internet, and whatever information you can lay
your hands on.
2. Get some hands-on experience in using automated testing tools, e.g. WinRunner and many other automated testing tools.
What is PDR?
PDR is an acronym. In the world of software QA or testing, it stands for "peer design review", informally known as "peer review".

What is good about PDRs?


PDRs are informal meetings, and I do like all informal meetings. PDRs make perfect sense, because they're for the mutual benefit of you and your end client.
Your end client requires a PDR, because they work on a product, and want to come up with the very best possible design and documentation. Your end client
requires you to have a PDR, because when you organize a PDR, you invite and assemble the end client's best experts and encourage them to voice their
concerns as to what should or should not go into the design and documentation, and why.
When you're a developer, designer, author, or writer, it's also to your advantage to come up with the best possible design and documentation. Therefore you want
to embrace the idea of the PDR, because holding a PDR gives you a significant opportunity to invite and assemble the end client's best experts and make them
work for you for one hour, for your own benefit. To come up with the best possible design and documentation, you want to encourage your end client's experts to
speak up and voice their concerns as to what should or should not go into your design and documentation, and why.

Why is it that my company requires a PDR?


Your company requires a PDR, because your company wants to be the owner of the very best possible design and documentation. Your company requires a PDR,
because when you organize a PDR, you invite, assemble and encourage the company's best experts to voice their concerns as to what should or should not go
into your design and documentation, and why.
Please don't be negative. Please do not assume your company is finding fault with your work, or distrusting you in any way. Remember, PDRs are not about you,
but about design and documentation. There is a 90+ per cent probability your company wants you, likes you, and trusts you, because you're a specialist, and
because your company hired you after a long and careful selection process.
Your company requires a PDR, because PDRs are useful and constructive. Just about everyone - even corporate chief executive officers (CEOs) - attend PDRs
from time to time. When a corporate CEO attends a PDR, he has to listen for "feedback" from shareholders. When a CEO attends a PDR, the meeting is called the
"annual shareholders' meeting".

Give me a list of ten good things about PDRs!


Number 1: PDRs are easy, because all your meeting attendees are your co-workers and friends.
Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you produce better designs and better documents than the ones you
could come up with, without the help of your meeting attendees.
Number 3: Preparation for PDRs helps a lot, but, in the worst case, if you had no time to read every page of every document, it's still OK for you to show up at the
PDR.
Number 4: It's technical expertise that counts the most, but many times you can influence your group just as much, or even more so, if you're dominant or have
good acting skills.
Number 5: PDRs are easy, because, even at the best and biggest companies, you can dominate the meeting by being either very negative, or very bright and
wise.
Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and wisest meeting attendees are usually gentle on you; they deliver
gentle suggestions that are constructive, not destructive.
Number 7: You get many chances to express your ideas, every time a meeting attendee asks you to justify why you wrote what you wrote.
Number 8: PDRs are effective, because there is no need to wait for anything or anyone; because the attendees make decisions quickly (as to what errors are in
your document). There is no confusion either, because all the group's recommendations are clearly written down for you by the PDR's facilitator.
Number 9: Your work goes faster, because the group itself is an independent decision making authority. Your work gets done faster, because the group's decisions
are subject to neither oversight nor supervision.
Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and they work for you, for FREE!
What is the exit criteria?
The "exit criteria" is a checklist, sometimes known as the "PDR sign-off sheet". It is a list of peer design review related tasks that have to be done by the facilitator
or attendees of the PDR, either during or near the conclusion of the PDR.
By having a checklist, and by going through the checklist, the facilitator can verify that A) all attendees have inspected all the relevant documents and reports, B)
all suggestions and recommendations for each issue have been recorded, and C) all relevant facts of the meeting have been recorded.
The facilitator's checklist includes the following questions:
* Have we inspected all the relevant documents, code blocks, or products?
* Have we completed all the required checklists?
* Have I recorded all the facts relevant to this peer review?
* Does anyone have any additional suggestions, recommendations, or comments?
* What is the outcome of this peer review?
At the end of the PDR, the facilitator asks the attendees to make a decision as to the outcome of the PDR, i.e. "What is our consensus... are we accepting the
design (or document or code)?" Or, "Are we accepting it with minor modifications?" Or, "Are we accepting it after it has been modified and approved through e-
mails to the attendees?" Or, "Do we want another peer review?" This is a phase during which the attendees work as a committee, and the committee's decision is
final.

What are the parameters of peer reviews?


By definition, parameters are values on which something else depends. Peer reviews (also known as PDRs) depend on the attendance and active participation of
several key people: the facilitator, the task lead, the test lead, and at least one additional reviewer.
The attendance of these four people is usually required for the approval of the PDR. Depending on your company's policy, other participants are often invited, but
generally not required for approval.
PDRs depend on the facilitator, sometimes known as the moderator, who controls the meeting, keeps the meeting on schedule, and records all suggestions from
all attendees.
PDRs greatly depend on the developer, also known as the designer, author, or task lead -- usually a software engineer -- who is most familiar with the project, and
most likely able to answer any questions or address any concerns that may come up during the PDR.
PDRs greatly depend on the tester, also known as the test lead, or bench test person -- usually another software engineer -- who is also familiar with the project, and
most likely able to answer any questions or address any concerns that may come up during the PDR.
PDRs greatly depend on the participation of additional reviewers and additional attendees who often make specific suggestions and recommendations, and ask
the largest number of questions.

How can I shift my focus and area of work from QC to QA?


Number one: Focus on your strengths, skills, and abilities! Realize that there are MANY similarities between Quality Control and Quality Assurance! Realize you
have MANY transferable skills!
Number two: Make a plan! Develop a belief that getting a job in QA is easy! HR professionals cannot tell the difference between quality control and quality
assurance! HR professionals tend to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!
Number three: Make it a reality! Invest your time! Get some hands-on experience! Do some QA work! Do any QA work, even if, for a few months, you get paid a
little less than usual! Your goals, beliefs, enthusiasm, and action will make a huge difference in your life!
Number four: Read all you can, and that includes reading product pamphlets, manuals, books, information on the Internet, and whatever information you can lay
your hands on!
What techniques and tools can enable me to migrate from QC to QA?
Technique number one: Mental preparation. Understand and believe what you want is not unusual at all! Develop a belief in yourself! Start believing what you want
is attainable! You can change your career! Every year, millions of men and women change their careers successfully!
Number two: Make a plan! Develop a belief that getting a job in QA is easy! HR professionals cannot tell the difference between quality control and quality
assurance! HR professionals tend to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!
Number three: Make it a reality! Invest your time! Get some hands-on experience! Do some QA work! Do any QA work, even if, for a few months, you get paid a
little less than usual! Your goals, beliefs, enthusiasm, and action will make a huge difference in your life!
Number four: Read all you can, and that includes reading product pamphlets, manuals, books, information on the Internet, and whatever information you can lay
your hands on!

What is the difference between build and release?


Builds and releases are similar, because both builds and releases are end products of software development processes. Builds and releases are similar, because
both builds and releases help developers and QA teams to deliver reliable software.
A build is a version of software, typically one that is still in testing. A version number is usually given to a released product, but sometimes a build number is used
instead.
Difference number one: "Build" refers to software that is still in testing, but "release" refers to software that is usually no longer in testing.
Difference number two: "Builds" occur more frequently; "releases" occur less frequently.
Difference number three: "Versions" are based on "builds", and not vice versa. Builds (or a series of builds) are generated first, as often as one build per every
morning (depending on the company), and then every release is based on a build (or several builds), i.e. the accumulated code of several builds.

What is the difference between version and release?


Both version and release indicate particular points in the software development life cycle, or in the life cycle of a document. Both terms, version and release, are
similar, i.e. pretty much the same thing, but there are minor differences between them.
Minor difference number 1: Version means a variation of an earlier or original type. For example, you might say, "I've downloaded the latest version of XYZ
software from the Internet. The version number of this software is _____"
Minor difference number 2: Release is the act or instance of issuing something for publication, use, or distribution. Release means something thus released. For
example, "Microsoft has just released their brand new gaming software known as _______"
What is data integrity?
Data integrity is one of the six fundamental components of information security. Data integrity is the completeness, soundness, and wholeness of the data that also
complies with the intention of the creators of the data.
In databases, important data - including customer information, order database, and pricing tables - may be stored. In databases, data integrity is achieved by
preventing accidental, or deliberate, or unauthorized insertion, or modification, or destruction of data.

How do you test data integrity?


Data integrity is tested by the following tests:
Verify that you can create, modify, and delete any data in tables.
Verify that sets of radio buttons represent fixed sets of values.
Verify that a blank value can be retrieved from the database.
Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not
occur.
Verify that the default values are saved in the database, if the user input is not specified.
Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.
Why do we perform data integrity testing? Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be
performed on a regular basis, because important data could, can, and will change over time.
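A few of these checks can be sketched against an in-memory SQLite table (the table and values are hypothetical):

```python
import sqlite3

# In-memory database standing in for the application's real store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, balance REAL)")

# Save a particular set of data, then read it back.
name, balance = "A" * 300, 1234.56   # a long string probes truncation
conn.execute("INSERT INTO customer VALUES (?, ?)", (name, balance))
row = conn.execute("SELECT name, balance FROM customer").fetchone()

# Each value must come back fully: no truncation, no rounding.
assert row == (name, balance)

# A blank (NULL) value must also be storable and retrievable.
conn.execute("INSERT INTO customer VALUES (NULL, NULL)")
blank = conn.execute("SELECT name FROM customer WHERE name IS NULL").fetchone()
assert blank == (None,)
```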

What is data validity?


Data validity is the correctness and reasonableness of data. Reasonableness of data means, for example, that account numbers fall within a range, numeric
data consists only of digits, dates have a valid month, day, and year, and proper names are spelled correctly. Data validity errors are probably the most common,
and the most difficult to detect, of data-related errors.
What causes data validity errors? Data validity errors are usually caused by incorrect data entries, when a large volume of data is entered in a short period of time.
For example, a data entry operator enters 12/25/2010 as 13/25/2010, by mistake, and this data is therefore invalid. How can you reduce data validity errors? You
can use one of the following two, simple field validation techniques.
Technique 1: If the date field in a database uses the MM/DD/YYYY format, then you can use a program with the following two data validation rules: "MM" should
not exceed "12", and "DD" should not exceed "31".
Technique 2: If the original figures do not seem to match the ones in the database, then you can use a program to validate data fields. You can compare the sum
of the numbers in the database data field to the original sum of numbers from the source. If there is a difference between the two figures, it is an indication of an
error in at least one data element.
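Both techniques can be sketched as follows (the function names and figures are hypothetical):

```python
def valid_date(mmddyyyy):
    """Technique 1: field validation of an MM/DD/YYYY string."""
    mm, dd, yyyy = mmddyyyy.split("/")
    return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31

def sums_match(db_values, source_total):
    """Technique 2: compare the sum stored in the database to the source total."""
    return sum(db_values) == source_total

print(valid_date("12/25/2010"))  # True
print(valid_date("13/25/2010"))  # False: "MM" must not exceed 12
print(sums_match([100, 250, 75], 425))  # True: no data element is in error
```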

What is the difference between data validity and data integrity?


Difference number one: Data validity is about the correctness and reasonableness of data, while data integrity is about the completeness, soundness, and
wholeness of the data that also complies with the intention of the creators of the data.
Difference number two: Data validity errors are more common, and data integrity errors are less common.
Difference number three: Errors in data validity are caused by human beings - usually data entry personnel - who enter, for example, 13/25/2010, by mistake, while
errors in data integrity are caused by bugs in computer programs that, for example, cause the overwriting of some of the data in the database, when somebody
attempts to retrieve a blank value from the database.
What is TestDirector?
TestDirector®, also known as Mercury TestDirector®, is a software tool made for software QA professionals. Mercury TestDirector®, as the name implies, is a
product made by Mercury Interactive Corporation, 379 North Whisman Road, Mountain View, California 94043 USA.
Mercury's other products include the Mercury QuickTest Professional™, Mercury WinRunner™, also known as WinRunner™, and Mercury Business Process
Testing™.

Tell me about the TestDirector®


The TestDirector® is a software tool that helps software QA professionals to gather requirements, to plan, schedule and run tests, and to manage and track
defects/issues/bugs. It is a single browser-based application that streamlines the software QA process.
The TestDirector's "Requirements Manager" links test cases to requirements, ensures traceability, and calculates what percentage of the requirements are covered
by tests, how many of these tests have been run, and how many have passed or failed.
As to planning, the test plans can be created, or imported, for both manual and automated tests. The test plans then can be reused, shared, and preserved.
The TestDirector’s "Test Lab Manager" allows you to schedule tests to run unattended, or run even overnight.
The TestDirector's "Defect Manager" supports the entire bug life cycle, from initial problem detection through fixing the defect, and verifying the fix.
Additionally, the TestDirector can create customizable graphs and reports, including test execution reports and release status assessments.

What is the difference between static and dynamic testing?


Difference number 1: Static testing is about prevention, dynamic testing is about cure.
Difference number 2: The static tools offer greater marginal benefits.
Difference number 3: Static testing is many times more cost-effective than dynamic testing.
Difference number 4: Static testing beats dynamic testing by a wide margin.
Difference number 5: Static testing is more effective!
Difference number 6: Static testing gives you comprehensive diagnostics for your code.
Difference number 7: Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50%
statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.
Difference number 8: Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take
longer than compilation.
Difference number 9: Dynamic testing finds fewer bugs than static testing.
Difference number 10: Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.
Difference number 11: Static testing can find all of the following that dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to
test, code that does not conform to coding standards, and ANSI violations.
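As a rough illustration of difference number 10, Python's `compile` step can stand in for a static checker: the bug below is caught without ever executing the code (the code fragment is hypothetical):

```python
# A hypothetical code fragment containing a bug: the missing ":" is a
# syntax error, detectable without ever running the code.
source = "def add(a, b)\n    return a + b\n"

try:
    compile(source, "<example>", "exec")
    found_statically = False
except SyntaxError:
    found_statically = True

assert found_statically  # the bug was found before any execution
```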
What testing tools should you use?
Ideally, you should use both static and dynamic testing tools. To maximize software reliability, you should use both static and dynamic techniques, supported by
appropriate static and dynamic testing tools.
Reason number 1: Static and dynamic testing are complementary. Static and dynamic testing find different classes of bugs. Some bugs are detectable only by
static testing, some only by dynamic.
Reason number 2: Dynamic testing does detect some errors that static testing misses. To eliminate as many errors as possible, both static and dynamic testing
should be used.
Reason number 3: All this static testing (i.e. testing for syntax errors, testing for code that is hard to maintain, testing for code that is hard to test, testing for code
that does not conform to coding standards, and testing for ANSI violations) takes place before compilation.
Reason number 4: Static testing takes roughly as long as compilation and checks every statement you have written.

Why should I use static testing techniques?


There are several reasons why one should use static testing techniques.
Reason number 1: One should use static testing techniques because static testing is a bargain, compared to dynamic testing.
Reason number 2: Static testing is up to 100 times more effective. Even in selective testing, static testing may be up to 10 times more effective. The most
pessimistic estimates suggest a factor of 4.
Reason number 3: Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs by static testing is many times lower than
detecting bugs by dynamic testing.
Reason number 4: About half of the bugs, detectable by dynamic testing, can be detected earlier by static testing.
Reason number 5: If one uses neither static nor dynamic test tools, the static tools offer greater marginal benefits.
Reason number 6: If an urgent deadline looms on the horizon, the use of dynamic testing tools can be omitted, but tool-supported static testing should never be
omitted.

How can I get registered and licensed as a professional engineer?


To get registered and licensed as a professional engineer, you have to be a legal resident of the jurisdiction where you submit your application. You have to be at
least 18, trustworthy, and with no criminal record. You have to have at the very least a bachelor's degree in engineering, from an established, recognized, and
approved university. You have to provide two references who are licensed and professional engineers. Then you have to work for a few years as an "engineer in
training", under the supervision of a registered and licensed professional engineer. You also have to pass a test of competence in both your engineering discipline
and professional ethics. In my experience and that of others, the biggest hurdle in getting a registration and license seems to be either the lack of a university
degree in engineering, or the lack of an acceptable, verifiable work experience under the supervision of a licensed, professional engineer.
What is the definition of top down design?
Top down design progresses from simple design to detailed design. Top down design solves problems by breaking them down into smaller, easier to solve
subproblems. Top down design creates solutions to these smaller problems, and then tests them using test drivers. In other words, top down design starts the
design process with the main module or system, then progresses down to lower level modules and subsystems. To put it differently, top down design looks at the
whole system, and then explodes it into subsystems, or smaller parts. A systems engineer or systems analyst determines what the top level objectives are, and
how they can be met. He then divides the system into subsystems, i.e. breaks the whole system into logical, manageable-size modules, and deals with them
individually.
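A sketch of top down design, using a hypothetical reporting system: the top-level module is designed first, the lower-level modules are filled in afterwards (initially they could be stubs), and a test driver exercises the top level:

```python
# Top level: the whole system, designed first.
def monthly_report(orders):
    totals = summarize(orders)      # subsystem 1
    return format_report(totals)    # subsystem 2

# Lower-level modules; in early top down design these could be stubs
# returning canned values, so the top level is testable immediately.
def summarize(orders):
    return {"count": len(orders), "total": sum(orders)}

def format_report(totals):
    return f"{totals['count']} orders, {totals['total']} total"

# Test driver exercising the top-level module.
print(monthly_report([10, 20, 5]))  # 3 orders, 35 total
```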

What is the future of software QA/testing?


In software QA/testing, employers increasingly want us to have a combination of technical, business, and personal skills. By technical skills they mean skills in IT,
quantitative analysis, data modeling, and technical writing. By business skills they mean skills in strategy and business writing. By personal skills they mean
personal communication, leadership, teamwork, and problem-solving skills. We, employees, on the other hand, increasingly want more autonomy, a better lifestyle,
a more employee-oriented company culture, and a better geographic location. We continue to enjoy relatively good job security and, depending on the business
cycle, many job opportunities.

We realize our skills are important, and have strong incentives to upgrade them, although we sometimes lack the information on how to do so. Educational
institutions increasingly ensure that we are exposed to real-life situations and problems, but high turnover rates and a rapid pace of change in the IT industry often
act as strong disincentives for employers to invest in our skills, especially non-company-specific skills. Employers continue to establish closer links with educational
institutions, both through in-house education programs and human resources. The share of IT workers with IT degrees keeps increasing. Certification continues to
help employers quickly identify those of us with the latest skills.

During boom times, smaller and younger companies continue to be the most attractive to us, especially those that offer stock options and performance bonuses in
order to retain and attract those of us who are the most skilled. High turnover rates continue to be the norm, especially during an economic boom. Software
QA/testing continues to be outsourced to offshore locations. Software QA/testing continues to be performed mostly by men, but the share of women keeps
increasing.

How can I be effective and efficient, when I'm testing e-commerce web sites?
When you're doing black box testing of an e-commerce web site, you're most efficient and effective when you're testing the site's visual appeal, content, and home
page. To be effective and efficient, you need to:
* verify that the site is well planned and customer-friendly;
* verify that the choices of colors and fonts are attractive;
* verify that the site's audio, video, and graphics are customer-friendly and attractive;
* verify that every page of the site is displayed properly on all the popular browsers;
* verify the authenticity of facts, and ensure the site provides reliable and consistent information;
* test the site for appearance, and for grammatical and spelling errors;
* test the site for visual appeal, choice of browsers, consistency of font size, download time, broken links, missing links, incorrect links, and browser compatibility;
* test each toolbar, menu item, window, field prompt, pop-up text, and error message;
* test every page of the site for left and right justifications, every shortcut key, each control, each push button, every radio button, and each item on every
drop-down menu;
* test each list box and each help menu item;
* check whether the command buttons are grayed out when they're not in use.
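One small slice of such testing, checking for missing links, can be sketched offline with Python's standard HTML parser (the page content is hypothetical; a real test would also fetch each collected URL to detect broken links):

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Collect every <a href> so each link can be verified afterwards."""
    def __init__(self):
        super().__init__()
        self.links = []    # hrefs to fetch and verify later
        self.suspect = []  # anchors with no href at all

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
            else:
                self.suspect.append("<a> with no href")

page = '<a href="/cart">Cart</a> <a>Broken</a> <a href="/help">Help</a>'
checker = LinkChecker()
checker.feed(page)
print(checker.links)    # ['/cart', '/help']
print(checker.suspect)  # ['<a> with no href']
```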
What is a backward compatible design?
The design is backward compatible if it continues to work with earlier versions of a language, program, code, or software. When the design is backward
compatible, changes to signals or data do not break the existing code.
For instance, a (mythical) web designer decides he should make some changes, because the fun of using Javascript and Flash is more important (to his
customers) than his backward compatible design. Or, alternatively, he decides, he has to make some changes because he doesn't have the resources to maintain
multiple styles of backward compatible web design. Therefore, our mythical web designer's decision will inconvenience some users, because some of the earlier
versions of Internet Explorer and Netscape will not display his web pages properly (as there are some serious improvements in the newer versions of Internet
Explorer and Netscape that make the older versions of these browsers incompatible with, for example, DHTML). This is when we say, "Our (mythical) web
designer's code fails to work with earlier versions of browser software, therefore his design is not backward compatible".
On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or, if he decides that he does have the
resources to maintain multiple styles of backward compatible code, then, obviously, no user will be inconvenienced when Microsoft or Netscape make some
serious improvements in their web browsers. This is when we can say, "Our mythical web designer's design is backward compatible".

What is the difference between top down and bottom up design?


Top down design proceeds from the abstract entity to get to the concrete design. Bottom up design proceeds from the concrete design to get to the abstract entity.
Top down design is most often used in designing brand new systems, while bottom up design is sometimes used when one is reverse engineering a design; i.e.
when one is trying to figure out what somebody else designed in an existing system.
Bottom up design begins the design with the lowest level modules or subsystems, and progresses upward to the main program, module, or subsystem. With
bottom up design, a structure chart is necessary to determine the order of execution, and the development of drivers is necessary to complete the bottom up
approach.
Top down design, on the other hand, begins the design with the main or top-level module, and progresses downward to the lowest level modules or subsystems.
Real life sometimes is a combination of top down design and bottom up design. For instance, data modeling sessions tend to be iterative, bouncing back and forth
between top down and bottom up modes, as the need arises.
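The role of a driver in bottom-up design can be sketched in Python. This is a minimal illustration, not a prescribed method: the low-level module (a hypothetical sales-tax routine) exists before the main program that will eventually call it, so throwaway driver code stands in for that missing caller.

```python
# Hypothetical low-level module, tested bottom-up before any
# higher-level program that will call it exists.
def sales_tax(amount, rate):
    """Lowest-level routine: compute tax on an amount."""
    return round(amount * rate, 2)

# A "driver" is throwaway code that stands in for the missing main
# program: it calls the low-level module with representative inputs.
def driver():
    cases = [(100.00, 0.05, 5.00), (19.99, 0.07, 1.40), (0.00, 0.05, 0.00)]
    for amount, rate, expected in cases:
        result = sales_tax(amount, rate)
        assert result == expected, f"{amount}: got {result}, want {expected}"
    return "all driver cases passed"
```

Once the real main program exists, the driver is discarded; its only purpose is to let the lowest-level module be exercised first.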

When is a process repeatable?


A process is repeatable, whenever we have the necessary processes in place, in order to repeat earlier successes on projects with similar applications. A process
is repeatable, if we use detailed and well-written processes and procedures. A process is repeatable, if we ensure that the correct steps are executed.
When the correct steps are executed, we facilitate a successful completion of the task. Documentation is critical. A software process is repeatable, if there are
requirements management, project planning, project tracking, subcontract management, QA, and configuration management.
Both QA processes and practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations,
code changes, test plans, test cases, bug reports, user manuals should all be documented, so that they are repeatable.
Document files should be well organized. There should be a system for easily finding and obtaining documents, and determining what document has a particular
piece of information. We should use documentation change management, if possible.
Give me one test case that catches all the bugs!
On the negative side, if there was a "magic bullet", i.e. the one test case that was able to catch ALL the bugs, or at least the most important bugs, it'd be a
challenge to find it, because test cases depend on requirements; requirements depend on what customers need; and customers have great many different needs
that keep changing. As software systems are changing and getting increasingly complex, it is increasingly more challenging to write test cases.
On the positive side, there are ways to create "minimal test cases" which can greatly simplify the test steps to be executed. But, writing such test cases is time
consuming, and project deadlines often prevent us from going that route. Often the lack of enough time for testing is the reason for bugs to occur in the field.
However, even with ample time to catch the "most important bugs", bugs still surface with amazing spontaneity. The fundamental challenge is, developers do not
seem to know how to avoid providing the many opportunities for bugs to hide, and testers do not seem to know where the bugs are hiding.

What is a parameter?
In software QA or software testing, a parameter is an item of information - such as a name, number, or selected option - that is passed to a program, by a user or
another program. By definition, in software, a parameter is a value on which something else depends. Any desired numerical value may be given as a parameter.
In software development, we use parameters when we want to allow a specified range of variables. We use parameters when we want to differentiate behavior or
pass input data to computer programs or their subprograms. Thus, when we are testing, the parameters of the test can be varied to produce different results,
because parameters do affect the operation of the program receiving them.
Example 1: We use a parameter, such as temperature, that defines a system. In this definition, it is temperature that defines the system and determines its
behavior.
Example 2: In the definition of function f(x) = x + 10, x is a parameter. In this definition, x defines the f(x) function and determines its behavior. Thus, when we are
testing, x can be varied to make f(x) produce different values, because the value of x does affect the value of f(x).
When parameters are passed to a function subroutine, they are called arguments.
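The f(x) = x + 10 example from the text translates directly into Python (an illustrative sketch; the name f follows the text):

```python
# x is the parameter; the value passed at call time is the argument.
def f(x):
    return x + 10

# Varying the parameter varies the result, which is exactly what a
# tester exploits: each distinct argument is a distinct test input.
results = [f(x) for x in (0, -10, 5)]  # 10, 0, 15
```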

What is a constant?
In software development or software testing, a constant is a meaningful name that represents a number or string that does not change. Constants are variables
that remain the same, i.e. constant, throughout the execution of a program.
Why do we, developers, use constants? Because if we have code that contains constant values that keep reappearing, or, if we have code that depends on certain
numbers that are difficult to remember, we can improve both the readability and maintainability of our code, by using constants.
To give you an example, we declare a constant and we call it "Pi". We set it to 3.14159265, and use it throughout our code. Constants, such as Pi, as the name
implies, store values that remain constant throughout the execution of our program.
Keep in mind that, unlike variables, which can be read from and written to, constants are read-only. Although constants resemble variables, we cannot modify or
assign new values to them, as we can to variables, but we can make constants public or private. We can also specify what data type they are.
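A minimal Python sketch of the Pi example above. Note that Python enforces constants by convention only: the ALL_CAPS name signals read-only intent.

```python
# By convention, an ALL_CAPS module-level name is treated as a
# constant and never reassigned anywhere in the program.
PI = 3.14159265  # the constant from the text

def circle_area(radius):
    # Using the named constant instead of the literal 3.14159265
    # improves both readability and maintainability, as noted above.
    return PI * radius * radius
```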
What testing approaches can you tell me about?
Each of the followings represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional
testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall
testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta
testing, and mutation testing.

Can you give me five common problems?


Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.
Requirements are poorly written when they're unclear, incomplete, too general, or not testable; therefore there will be problems.
The schedule is unrealistic if too much work is crammed in too little time.
Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
It's extremely common that new features are added after development is underway.
Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.

Can you give me five common solutions?


Solid requirements, realistic schedules, adequate testing, firm requirements, and good communication.
Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help
nail down requirements.
Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able
to complete the project without burning out.
Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend design against changes and additions, once development has begun
and be prepared to explain consequences.
If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and
customers can see what to expect; this will minimize changes later on.
Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change management
tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.
What if the application has functionality that wasn't in the requirements?
It can take a serious effort to determine if an application has significant unexpected or hidden functionality, which can indicate deeper problems in the software
development process.
If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken
into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any
significant added risks as a result of the unexpected functionality. If the unexpected functionality only affects minor areas, e.g. small improvements in the user
interface, then it may not be a significant risk.

How can software QA processes be implemented without stifling productivity?


When you implement software QA processes without stifling productivity, you want to implement them slowly over time. You want to use consensus to reach
agreement on processes, and adjust, and experiment, as an organization grows and matures.
Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be
improved focus, and less wasted effort.
At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated
tracking and reporting, minimize time required in meetings and promote training as part of the QA process.
However, no one, especially not the talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that
more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Should I take a course in manual testing?


Yes, you want to consider taking a course in manual testing. Why? Because learning how to perform manual testing is an important part of one's education. Unless
you have a significant personal reason for not taking a course, you do not want to skip an important part of an academic program.

To learn to use WinRunner, should I sign up for a course at a nearby educational institution?
Free, or inexpensive, education is often provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many
other software testing tools.
In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutions. Classes, especially non-degree courses in community colleges, tend
to be inexpensive.
Black-box Testing
-Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing.
-Black box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.
-Black-box testing relies on the specification of the system or the component that is being tested to derive test cases. The system is a black box whose behavior
can only be determined by studying its inputs and the related outputs.
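A minimal black-box sketch in Python, assuming a hypothetical leap-year specification. The point is that the test cases are derived from the specification alone, never from the function body:

```python
# Unit under test, treated as a black box: the tester never reads this body.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Cases derived purely from the specification ("divisible by 4, except
# centuries not divisible by 400"), with no knowledge of the internals:
spec_cases = {2024: True, 2023: False, 1900: False, 2000: True}

def run_black_box():
    # Behavior is judged only by comparing inputs to expected outputs.
    return all(is_leap_year(y) == expected for y, expected in spec_cases.items())
```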

Affinity Diagram
A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Brainstorming
A group process for generating creative and diverse ideas.

Branch Coverage Testing


A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.
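A minimal Python sketch of branch coverage on a unit with a single decision point (names are illustrative):

```python
# Unit with one decision point, hence two branches to cover.
def classify(n):
    if n < 0:               # the decision point
        return "negative"   # branch taken when n < 0
    return "non-negative"   # branch taken otherwise

# Branch coverage requires at least one test driving each branch:
branch_tests = [(-1, "negative"), (0, "non-negative")]
```

A single test input would execute only one of the two branches, so both inputs are needed to satisfy the criterion.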

Cause-and-Effect (Fishbone) Diagram


A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.
Cause-effect Graphing
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a
beneficial side effect in pointing out incompleteness and ambiguities in specifications.

Checksheet
A form used to record data as it is gathered.

Clear-box Testing
Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since a literal "white box" would be opaque and would not
permit visibility into the code, whereas "clear-box" suggests the internal visibility this testing requires. This is also known as glass-box or open-box testing.

Client
The end user that pays for the product received, and receives the benefit from the use of the product.

Control Chart
A statistical method for distinguishing between common and special cause variation exhibited by processes.
Unit Testing
The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its
functional specification or its implemented structure matches the intended design structure.
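A minimal unit-test sketch using Python's standard unittest framework, for a hypothetical one-function unit:

```python
import unittest

# Hypothetical smallest independently testable unit: a single function.
def absolute(n):
    return -n if n < 0 else n

# Each test method checks the unit against its functional specification.
class TestAbsolute(unittest.TestCase):
    def test_negative_input(self):
        self.assertEqual(absolute(-3), 3)

    def test_non_negative_input(self):
        self.assertEqual(absolute(0), 0)
        self.assertEqual(absolute(7), 7)
```

Running the module with unittest.main() would execute both test methods and report any failures.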

User
The end user that actually uses the product received.

V- Diagram (model)
A diagram that visualizes the order of testing activities and their corresponding phases of development.

Validation
The process of evaluating software to determine compliance with specified requirements.

Verification
The process of evaluating the products of a given software development activity to determine correctness and consistency with respect to the products and
standards provided as input to that activity.

Walkthrough
Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been
extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White-box Testing
1. Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear box testing, glass-box or open-
box testing. White box testing determines if program-code structure and logic is faulty. The test is accurate only if the tester knows what the program is supposed
to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible
code must also be readable.

2. White box method relies on intimate knowledge of the code and a procedural design to derive the test cases. It is most widely utilized in unit testing to determine
all possible paths within a module, to execute all loops and to test all logical expressions.
Using white-box testing, the software engineer can (1) guarantee that all independent paths within a module have been exercised at least once; (2) examine all
logical decisions on their true and false sides; (3) execute all loops and test their operation at their limits; and (4) exercise internal data structures to assure their
validity (Pressman, 1997). This form of testing concentrates on the procedural detail. However, there is no automated tool or testing system for this testing method.
Therefore even for relatively small systems, exhaustive white-box testing is impossible because of all the possible path permutations.
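A minimal Python sketch of point (3) above, exercising a loop at its limits. The unit is hypothetical; the idea is to run the loop zero times, exactly once, and many times:

```python
# Unit under test: a loop whose body runs len(values) times.
def total(values):
    s = 0
    for v in values:   # the loop that white-box testing targets
        s += v
    return s

# Loop testing at the limits: zero iterations, one iteration, many.
loop_tests = [([], 0), ([5], 5), (list(range(1, 101)), 5050)]
```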
Customer (end user)
The individual or organization, internal or external to the producing organization, that receives the product.

Cyclomatic Complexity
A measure of the number of linearly independent paths through a program module.
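For a structured program this measure can be computed as the number of binary decision points plus one (equivalently, V(G) = E - N + 2 on the flow graph). A small illustrative sketch:

```python
# For a structured program, cyclomatic complexity reduces to
# V(G) = D + 1, where D is the number of binary decision points.
def cyclomatic_complexity(decision_points):
    return decision_points + 1

# A module with two if-statements and one while-loop has three
# decision points, hence four linearly independent paths:
v_g = cyclomatic_complexity(3)  # 4
```

Straight-line code with no decisions yields V(G) = 1: a single path.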

Data Flow Analysis


Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values
at various points of executing the source program.

Debugging
The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.

Defect Analysis
Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order
to direct process improvement efforts.

Defect Density
Ratio of the number of defects to program length (a relative number).
Defect
NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product
that is not in the statement of requirements that define the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

Error
1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing
Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of
detecting faults, either a specified class of faults or all possible faults.
Desk Checking
A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against requirements and standards.

Dynamic Analysis
The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test
data.

Dynamic Testing
Verification or validation performed which executes the system’s code.

Partition Testing
This method categorizes the inputs and outputs of a class in order to test them separately. This minimizes the number of test cases that have to be designed.
To determine the different categories to test, partitioning can be broken down as follows:
- State-based partitioning - categorizes class operations based on how they change the state of a class
- Attribute-based partitioning - categorizes class operations based on attributes they use
- Category-based partitioning - categorizes class operations based on the generic function the operations perform
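A minimal Python sketch of the first category, state-based partitioning, on a hypothetical Stack class: operations that change the state are tested separately from operations that only read it.

```python
# Hypothetical class under test.
class Stack:
    def __init__(self):
        self.items = []
    def push(self, x):        # changes state
        self.items.append(x)
    def pop(self):            # changes state
        return self.items.pop()
    def peek(self):           # reads state only
        return self.items[-1]

# Partition 1: state-changing operations (push, pop) tested together.
def test_state_changing():
    s = Stack()
    s.push(1)
    s.push(2)
    return s.pop() == 2 and s.pop() == 1

# Partition 2: state-reading operations (peek) tested separately.
def test_state_reading():
    s = Stack()
    s.push(9)
    return s.peek() == 9 and s.peek() == 9  # peek must not change state
```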

Evaluation
The process of examining a system or system component to determine the extent to which specified properties are present.

Execution
The process of a computer carrying out an instruction or instructions of a computer program.

Exhaustive Testing
Executing the program with all possible combinations of values for program variables.

Failure
The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

Failure-directed Testing
Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.

Fault
A manifestation of an error in software. A fault, if encountered, may cause a failure.

Fault Tree Analysis


A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

Fault-based Testing
1. Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically,
frequently occurring faults.
2. This type of testing allows for designing test cases based on the client specification or the code or both. It tries to identify plausible faults (areas of design or
code that may lead to errors). For each of these faults a test case is developed to "flush" the errors out. These tests also force each line of code to be executed
Flowchart
A diagram showing the sequential steps of a process or of a workflow around a product or service.

Formal Review
A technical review conducted with the end user, including the types of reviews called for in the standards.

Function Points
A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data
communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present; 1 = minor influence; 5 = strong influence.

Heuristics Testing
Another term for failure-directed testing.

Histogram
A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram
illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

Hybrid Testing
A combination of top-down testing combined with bottom-up testing of prioritized or available components.
Incremental Analysis
Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

Infeasible Path
Program statement sequence that can never be executed.

Inputs
Products, services, or information needed from suppliers to make a process work.

Operational Requirements
Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational
effectiveness and suitability of a system prior to deployment.
Intrusive Testing
Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real
environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on
the same platform.

Class Level Methods


As mentioned above, a class (and its operations) is the module most concentrated on in OO environments. From there, testing should expand to other classes and
sets of classes, just as traditional models are tested by starting at the module level and continuing to module clusters or builds and then the whole program.

Random Testing
This is one of the methods used to exercise a class. It is based on developing a random test sequence that tries the minimum number of operations typical of the
behavior of the class.

Basis Path Testing


Basis path testing is a white-box technique. It allows the design and definition of a basis set of execution paths. The test cases created from the basis set allow the
program to be executed in such a way as to examine each possible path through the program by executing each statement at least once.
To be able to determine the different program paths, the engineer needs a representation of the logical flow of control. The control structure can be illustrated by a
flow graph. A flow graph can be used to represent any procedural design.
Next, a metric can be used to determine the number of independent paths. It is called cyclomatic complexity, and it provides the number of test cases that have to
be designed. This ensures coverage of all program statements.
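A minimal Python sketch: a unit with one decision point has cyclomatic complexity 2, so its basis set holds two independent paths, each covered by one test case (names are illustrative):

```python
# Unit under test: one if/else, so cyclomatic complexity V(G) = 2,
# meaning the basis set contains two independent paths.
def discount(price, is_member):
    if is_member:            # path 1 takes the true edge,
        return price * 0.9   # path 2 the false edge
    return price

# One test case per basis path; together they execute every statement
# at least once:
basis_cases = [((100, True), 90.0), ((100, False), 100)]

def run_basis_paths():
    return all(discount(*args) == expected for args, expected in basis_cases)
```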

Control Structure Testing


Because basis path testing alone is insufficient, other techniques should be utilized.
Condition testing can be utilized to design test cases which examine the logical conditions in a program. It focuses on all conditions in the program and includes
testing of both relational expressions and arithmetic expressions.
This can be accomplished using branch testing and/or domain testing methods. Branch testing executes both true and false branches of a condition. Domain
testing utilizes values on the left-hand side of the relation by making them greater than, equal to, and less than the right-hand side value. This method tests both
the values and the relational operators in the expression. The data flow testing method is effective for error protection because it is based on the relationship
between statements in the program according to the definitions and uses of variables.
The loop testing method concentrates on the validity of loop structures.
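A minimal Python sketch of domain testing on a hypothetical relational expression, trying values less than, equal to, and greater than the boundary:

```python
# Relational expression under test: age >= 18.
def may_vote(age):
    return age >= 18

# Domain testing tries values less than, equal to, and greater than
# the right-hand side of the relation, so both the boundary value and
# the relational operator itself are checked:
domain_cases = [(17, False), (18, True), (19, True)]
```

If the operator were mistakenly written as > instead of >=, the equal-to case (18) would expose the fault, which the other two cases alone would miss.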

Mutation Testing
A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
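A minimal Python sketch: a seeded "mutant" differs from the original by one slight variant (a relational operator), and a test set "kills" the mutant only if some input distinguishes the two (names are illustrative):

```python
# Original unit and a mutant with one seeded change: < became <=.
def is_minor(age):
    return age < 18

def is_minor_mutant(age):
    return age <= 18   # the slight variant a thorough test set must detect

# A test set kills the mutant if any test input distinguishes the two.
def kills_mutant(test_inputs):
    return any(is_minor(a) != is_minor_mutant(a) for a in test_inputs)
```

With inputs [10, 30] both versions agree, so that test set is too weak; adding the boundary input 18 kills the mutant, which is exactly the discrimination this method measures.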

Non-intrusive Testing
Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its
behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another
platform.

Operational Testing
Testing performed by the end user on software in its normal operating environment.
Metric
A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

SOFTWARE TESTING METRICS


In general testers must rely on metrics collected in analysis, design and coding stages of the development in order to design, develop and conduct the tests
necessary. These generally serve as indicators of overall testing effort needed. High-level design metrics can also help predict the complexities associated with
integration testing and the need for specialized testing software (e.g. stubs and drivers). Cyclomatic complexity may yield modules that will require extensive
testing as those with high cyclomatic complexity are more likely to be error prone.
Metrics collected from testing, on the other hand, usually comprise the number and type of errors, failures, bugs and defects found. These can then serve as
measures used to calculate further testing effort required. They can also be used as a management tool to determine the extent of the project's success or
failure and the correctness of the design. In any case these should be collected, examined and stored for future needs.

OBJECT ORIENTED TESTING METRICS


Testing metrics can be grouped into two categories: encapsulation and inheritance.

Encapsulation
Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states have to be tested.
Percent public and protected (PAP) - This number indicates the percentage of class attributes that are public and thus the likelihood of side effects among classes.
Public access to data members (PAD) - This metric shows the number of classes that access other classes' attributes, and thus violations of encapsulation.

Inheritance
Number of root classes (NOR) - A count of distinct class hierarchies.
Fan in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.
Number of children (NOC) and depth of the inheritance tree (DIT) - For each subclass, its superclass has to be re-tested. The above metrics (and others) are
different from those used in traditional software testing; however, metrics collected from testing should be the same (i.e. number and type of errors, performance
metrics, etc.).

Outputs
Products, services, or information supplied to meet end user needs.

Path Analysis
Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any
path.

Path Coverage Testing


A test method satisfying coverage criteria that require each logical path through the program to be tested. Paths through the program are often grouped into a
finite set of classes; one path from each class is tested.

Peer Reviews
A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.

Policy
Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

Problem
Any deviation from defined standards. Same as defect.

Test Bed
1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a
logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.

Procedure
The step-by-step method followed to ensure that standards are met.

Supplier
An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.

Process
The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.
ISSUES
Invariably, there will be issues with software testing under both models. This is simply because both environments are dynamic and have to deal with ongoing
changes during the life cycle of the project. That means changes in specifications, analysis, design and development. All of these of course affect testing.
However, we will concentrate on possible problem areas within the testing strategies and methods. We will examine how these issues pertain to each environment.

Procedural Software Testing Issues


Software testing in the traditional sense can miss a large number of errors if used alone. That is why processes like Software Inspections and Software Quality
Assurance (SQA) have been developed. However, even testing all by itself is very time consuming and very costly. It also ties up resources that could be used
otherwise. When combined with inspections and/or SQA or when formalized, it also becomes a project of its own requiring analysis, design and implementation
and supportive communications infrastructure. With it interpersonal problems arise and need managing. On the other hand, when testing is conducted by the
developers, it will most likely be very subjective. Another problem is that developers are trained to avoid errors. As a result they may conduct tests that prove the
product is working as intended (i.e. proving there are no errors) instead of creating test cases that tend to uncover as many errors as possible.

OO Software Testing Issues


A common way of testing OO software is testing-by-poking-around (Binder, 1995). In this case the developer's goal is to show that the product can do something
useful without crashing. Attempts are made to "break" the product. If and when it breaks, the errors are fixed and the product is then deemed "tested".
The testing-by-poking-around method of testing OO software is, in my opinion, as unsuccessful as random testing of procedural code or design. It leaves the
finding of errors up to chance.
Another common problem in OO testing is the idea that since a superclass has been tested, any subclasses inheriting from it need no testing of their own.
This is not true, because defining a subclass defines a new context for the inherited attributes. Because of the interaction between objects, we have to design test
cases to test each new context, and re-test the superclass as well, to ensure those objects work together properly.
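The point about subclasses redefining the context of inherited behaviour can be sketched with a small example. The classes below are hypothetical, chosen purely to illustrate why a test suite written for a superclass cannot simply be assumed to cover its subclasses:

```python
class Account:
    """A bank account that never allows the balance to go negative."""

    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance


class OverdraftAccount(Account):
    """Subclass creates a NEW context: withdrawals may exceed the balance
    up to an overdraft limit, so the inherited behaviour must be re-tested."""

    def __init__(self, balance=0, limit=100):
        super().__init__(balance)
        self.limit = limit

    def withdraw(self, amount):
        if amount > self.balance + self.limit:
            raise ValueError("overdraft limit exceeded")
        self.balance -= amount
        return self.balance


# A test written for Account would expect this call to raise an error;
# in the subclass's context it is perfectly valid and yields -70.
acct = OverdraftAccount(balance=50, limit=100)
print(acct.withdraw(120))  # -70
```

A superclass test asserting that `withdraw(120)` fails on a balance of 50 passes for `Account` but is wrong for `OverdraftAccount`, which is exactly why each new context needs its own test cases.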
Yet another misconception in OO is that if you do proper analysis and design (using the class interface or specification), you don't need to test at all, or can
perform black-box testing only.
However, function tests exercise only the "normal" paths or states of the class. In order to test the other paths or states, we need code instrumentation. It is also often
difficult to exercise exception and error handling without examining the source code.

Syntax
1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use;
2) the structure of expressions in a language; and
3) the rules governing the structure of the language.

Test Specifications
The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should
explain "how" to implement the test cases described in the test plan.
Test Specification Items
Each test specification should contain the following items:
Case No.: The test case number should be a three-digit identifier of the following form: c.s.t, where: c- is the chapter number, s- is the section number, and t- is the
test case number.
Title: is the title of the test.
ProgName: is the program name containing the test.
Author: is the person who wrote the test specification.
Date: is the date of the last revision to the test case.
Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.
Expected Error(s): Describes any errors expected.
Reference(s): Lists reference documentation used to design the specification.
Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.
Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.
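The items above can be captured in a simple record type. This is a hypothetical sketch, not part of the original methodology; the field names follow the list above, and the `case_no` property implements the c.s.t numbering scheme:

```python
from dataclasses import dataclass, field

@dataclass
class TestSpecification:
    """One test case specification, mirroring the items listed above."""
    chapter: int          # c - chapter number
    section: int          # s - section number
    test: int             # t - test case number
    title: str
    prog_name: str
    author: str
    date: str
    background: str = ""
    expected_errors: list = field(default_factory=list)
    references: list = field(default_factory=list)

    @property
    def case_no(self) -> str:
        # Three-part "c.s.t" identifier described in the text
        return f"{self.chapter}.{self.section}.{self.test}"


# Populated with the example specification that follows
spec = TestSpecification(7, 6, 3, "Invalid Sequence Number (TC)",
                         "UTEP221", "B.C.G.", "07/06/2000")
print(spec.case_no)  # 7.6.3
```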
Example Test Specification
Test Specification
Case No. 7.6.3 Title: Invalid Sequence Number (TC)
ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)

Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001

Script: (Pseudo Code for Coding Tests)


SEND_PIU FIS, OIC, DR1, DRI SNF=20
SEND_PIU LIS, SNF=20
R_RSP $2001
Formal Technical Review
Reviews include walkthroughs, inspections, round-robin reviews and other small-group technical assessments of software. A formal technical review is a planned and controlled meeting
attended by the analysts, programmers and other people involved in the software development. Its objectives are to:

• Uncover errors in logic, function or implementation for any representation of software


• To verify that the software under review meets the requirements
• To ensure that the software has been represented according to predefined standards
• To achieve software that is developed in a uniform manner.
• To make project more manageable.
• Early discovery of software defects, so that errors are substantially reduced in the development and maintenance phases.
• Serve as a training ground, enabling junior members to observe the different approaches in the software development phases (giving them a
helicopter view of what others are doing when developing the software).
• Allow for continuity and backup of the project, because a number of people become familiar with parts of the software that they might not
otherwise have seen.
• Greater cohesion between different developers.

Reluctance of implementing Software Quality Assurance


Managers are reluctant to incur the extra upfront cost
Such upfront costs are not budgeted in software development, so management may be unprepared to fork out the money.
Avoid red tape (bureaucracy)
Red tape means extra administrative activities that need to be performed, as SQA involves a lot of paperwork. New procedures to determine that software quality
is correctly implemented need to be developed, followed through and verified by external auditing bodies. These requirements involve a lot of administrative
paperwork.

Benefits of Software Quality Assurance to the organization


Higher reliability will result in greater customer satisfaction: as software development is essentially a business transaction between a customer and a developer,
customers will naturally tend to patronize the services of the developer again if they are satisfied with the product.

Overall life cycle cost of software reduced.


Software quality assurance is performed to ensure that software conforms to certain requirements and standards. The maintenance cost of the software is gradually
reduced because the software requires less modification after SQA. Maintenance refers to the correction and modification of errors that may be discovered only after
implementation of the program. Hence, proper SQA procedures identify more errors before the software is released, resulting in an overall
reduction of the life cycle cost.

Constraints of Software Quality Assurance


SQA is difficult to institute in small organizations where the resources needed to perform the necessary activities are not present. A smaller organization tends not to have
the required resources, such as manpower and capital, to assist in the SQA process.

Cost not budgeted


In addition, SQA requires the expenditure of dollars that are not otherwise explicitly budgeted to software engineering and software quality. The implementation of
SQA involves immediate upfront costs, and the benefits of SQA tend to be more long-term than short-term. Hence, some organizations may be less willing to
include the cost of implementing SQA into their budget.
Cookies
Cookies provide a simple way to identify a session among a group of HTTP/HTML requests. The cookie value is often an index into a table stored in the memory of a Web
server that points to an in-memory object holding the user's records. This has many potential problems: if the user's request is routed to a different server in a
subsequent request, the session information is unknown to that server. If the user is routed to a different server and the server is part of an application cluster,
then all the servers that could receive the user's request must have a way to synchronize the session data. Storing cookies and synchronizing sessions among
clusters of servers usually requires configuration, storage space, and memory.
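The in-memory session table described above can be sketched in a few lines. The names (`sessions`, `create_session`, `lookup_session`) are assumptions for illustration; the point is that the cookie value is only a key into one server's memory:

```python
import uuid

# One Web server's private session table: cookie value -> user record.
sessions = {}

def create_session(user_record):
    """Issue a new session cookie (the value sent in a Set-Cookie header)."""
    cookie = uuid.uuid4().hex
    sessions[cookie] = user_record
    return cookie

def lookup_session(cookie):
    """Return the user's record, or None if this server never saw the cookie.
    The None case is exactly the clustering problem described above: a
    different server in the cluster has no entry for this cookie value."""
    return sessions.get(cookie)


cookie = create_session({"user": "alice"})
print(lookup_session(cookie))      # {'user': 'alice'}
print(lookup_session("unknown"))   # None - request routed to the wrong server
```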
Data Encryption Key (DEK)
Used for encryption and decryption of message text.
Data Encryption Standard (DES)
Standardized encryption method used most on the Internet.
Datagram
A block of data that can travel from one Internet site to another without relying on an earlier exchange between the source and destination computers.
DSL (Digital subscriber line)
DSL offers high-bandwidth connections to small businesses and homes via regular telephone lines.
DDN (Defense Data Network)
The United States Department of Defense global communications network.
DECnet
A proprietary network protocol designed by Digital Equipment Corporation.
Dedicated line
A communications line used solely for computer connections, such as T1 and T3 lines. An additional phone line solely for your modem is a dedicated line as well.
Defense Data Network (DDN)
The United States Department of Defense global communications network.
DNS (Domain Name Service)
A name service used with TCP/IP hosts. DNS runs on numerous servers across the Internet. It is a distributed database for mapping host names to IP addresses (and
back) on the Internet.
E-mail
E-mail stands for electronic mail. Most networks support some form of email. The most popular, of course, is Internet email. E-mail allows you to send text (such as
a letter) to another person on another computer. In order to send an email, you have to know the email address of the recipient. Internet email addresses always
start with the user's account name, then the at sign (@), then the name of the computer where the user gets his or her email. You can never have spaces in email
or Web addresses. For example, my email address is: w@wdell.com
Alias
A nickname that refers to a network resource.
anonymous FTP
This is a method of bypassing security checks when you log on to an FTP site, by typing "anonymous" as your user ID and your e-mail address as the
password.
Archie
A method of automatically collecting, indexing, and retrieving files from the Internet.
ATM (Asynchronous Transfer Mode)
A transfer mode that designates bandwidth using a fixed-size packet or cell. Also called a "fast packet".
Authentication
A method of identifying the user to make sure the user is who he says he is.
Bandwidth
Bandwidth is the rate at which data that can be transferred through a connection. A standard PC modem has a very low bandwidth of about 3,000 to 5,000 bytes
per second. The very high speed lines that make up the backbone of the Internet are much faster, at least 1,000,000 bytes per second! Note that bandwidth is not
exactly the same as speed. If you only want to transfer one byte, it may not get where it is going any faster with high-bandwidth than it would with low-bandwidth.
However, if you want to transfer a million bytes, then high-bandwidth will definitely help! You can think of high-bandwidth as like drinking juice with a fat straw,
whereas low bandwidth is like drinking juice with one of those thin coffee straws.
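The distinction between bandwidth and speed above comes down to simple arithmetic: total transfer time is roughly a fixed latency plus bytes divided by bandwidth. The figures below are illustrative assumptions, not measurements:

```python
def transfer_time(num_bytes, bytes_per_sec, latency_sec=0.1):
    """Rough seconds to move num_bytes over a link: fixed latency plus
    the time the bytes spend on the wire."""
    return latency_sec + num_bytes / bytes_per_sec


# One byte: bandwidth barely matters, latency dominates.
print(transfer_time(1, 4_000))          # ~0.10025 s on a slow modem
print(transfer_time(1, 1_000_000))      # ~0.100001 s on a fast backbone link

# A million bytes: bandwidth matters a great deal.
print(transfer_time(1_000_000, 4_000))        # ~250.1 s
print(transfer_time(1_000_000, 1_000_000))    # ~1.1 s
```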
BBS (Bulletin Board System)
A computer which provides file archives, email, and announcements of interest. Users usually dial in with a terminal program to access these.
Bounce
This term refers to when you send an e-mail to a non-existent recipient and the e-mail is "bounced" back to you.
Common Gateway Interface (CGI)
The CGI is a communications protocol that Web servers use to communicate with other applications. Common Gateway Interface scripts allow Web servers to
access database (among other things); CGI applications, on the other hand, receive data from servers and return data through the CGI
Encryption
Encryption is a procedure used in cryptography to convert plain text into ciphertext to prevent anyone but the intended recipient from reading that data.
Firewall
A firewall is a hardware and/or software boundary that prevents unauthorized users from accessing restricted files on a network.
Finger
A finger is a UNIX command that displays information about a group or user on the Internet.
FTP (File Transfer Protocol)
FTP is the most widely used protocol for uploading and downloading files over an Internet connection. FTP is used so computers can share files with each
other.
Gopher
A search and retrieval tool for information used mostly for research.
HTML (Hypertext Markup Language)
HTML stands for Hypertext Markup Language. This is the standard method of publishing web documents onto the World Wide Web (WWW). HTML consists of
tags surrounded by brackets.
IP (Internet Protocol)
A packet switching protocol that is used as a network layer in the TCP/IP protocol suite.
IP Address (Internet Protocol Address)
Each computer is assigned an IP address. These are similar to phone numbers. When you attempt to connect to an IP address, you will connect to the computer
with that IP address.
IRC (Internet Relay Chat)
Internet Relay Chat, or IRC, allows users to chat on different channels over the Internet. IRC channels are preceded by a # sign and are controlled by channel
operators. Channel operators can kick people out of the channel if he or she feels necessary.
ISDN (Integrated Services Digital Network)
Integrated Services Digital Network (ISDN) combines digital network services and voice into one. Users can access digital services at 115,200 bps.
ISP (Internet Service Provider)
An organization or company that has a network with a direct link to the Internet. This is done by using a dedicated line connection, usually through a link known as
a T1 connection. Users can dial into to that network using their modem. Most ISP’s now charge a monthly fee.
Intranet
An intranet is a local area network (LAN), which may not be connected to the Internet but which has similar functions.
LAN
Local Area Network. A LAN allows users to share files between computers, send e-mail and access the Internet. Most companies use Local Area Networks so that
users can access information within or outside the LAN.
Listserv
An automated mailing list distribution system.
Mailing list
A mailing list is a list of e-mail addresses used to have messages forwarded to groups of people.
MIME (Multipurpose Internet Mail Extensions)
Multipurpose Internet Mail Extensions, or MIME, is the standard way of identifying different file formats in Internet mail. For example, if you receive an e-mail in a different
format than your own, MIME allows it to be decoded so you can read it.
Mirror site
A mirror site is usually set up due to overwhelming traffic on an existing web site. A mirror site is a site that is kept separate from the original site but contains the
same information as that site. This is an alternative to users who attempt to go to a web site but cannot due to traffic problems.
NFS (Network File System)
A Network File System allows a computer to access and use files over a network, just as if they were on a local disk.
NNTP (Network News Transfer Protocol)
A standard industry protocol for the inquiry, distribution, retrieval, and posting of news articles.
OpenURL
The OpenURL standard is a syntax for creating Web-transportable packages of metadata and/or identifiers about an information object.
PING
PING is a simple way to time or test the response of an Internet connection.
POP (Post Office Protocol)
A protocol that allows single users to read mail from a server.
PPP (Point-to-Point Protocol)
A PPP is a protocol that provides a method for sending and receiving packets over serial point-to-point links.
Protocol
A protocol is a method of communication between two devices. You can think of it as the language the devices use to communicate with each other, although it is
not the same as a programming language (by which a human programmer controls a computer). Different brands of printers, for example, each use their own
protocol (or "language") by which a computer can communicate with the printer. This is why a driver program must be written for each printer
URL rewriting
Instead of storing a cookie value in the HTTP header of a request, the URL is rewritten to include a session parameter. URL rewriting avoids cookies, but it
shares the same set of potential problems just mentioned above. Moreover, with URL rewriting there are no static URLs in your Web-enabled application, which often
makes caching and indexing more difficult. Finally, every Web page must be dynamically generated so that all hyperlinks include the session parameter.
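A minimal sketch of the rewriting step, using Python's standard `urllib.parse` module. The helper name and the `jsessionid` parameter name are assumptions for illustration (the parameter name varies by server framework):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def rewrite_url(url, session_id):
    """Append a session parameter to a hyperlink the server emits, so the
    session ID travels in the URL instead of a cookie."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["jsessionid"] = session_id  # parameter name is an assumption
    return urlunparse(parts._replace(query=urlencode(query)))


print(rewrite_url("http://example.com/cart?item=42", "abc123"))
# http://example.com/cart?item=42&jsessionid=abc123
```

Note that the rewritten link is no longer static: it differs per user, which is exactly what makes caching and indexing harder.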
Router
A device that forwards traffic between networks.
SLIP (Serial Line Internet Protocol)
A standard protocol which is used to run TCP/IP over a serial line.
SMTP (Simple Mail Transfer Protocol)
A standard protocol used to transfer e-mail messages.
Subnet mask (Address Mask)
This is used to identify which portions of an IP address correspond to its different parts (network and host). Also known as the "address mask".
T1
A connection of a host to the Internet where data is transferred at 1.544 megabits per second.
T3
A connection of a host to the Internet where data is transferred at 44.746 megabits per second.
TCP/IP (Transmission Control Protocol/Internet Protocol)
Transmission Control Protocol/Internet Protocol, or TCP/IP, is the basic communications protocol required for computers that use the Internet.
Telnet
This is the standard Internet protocol to connect to remote terminals.
Token ring
A token ring is a kind of LAN that consists of computers that are wired into a ring. Each computer is constantly in direct contact with the next node in the ring. A
token, which is a type of control message, is sent from one node to another, allowing messages to be sent throughout the network. A Token Ring network cannot
communicate within itself if one ring is broken.
URL (Uniform Resource Locator)
An example of a URL would be http://www.computertips.com. A Uniform Resource Locator is the universal address of an Internet web page. A URL
consists of three parts. First, it starts with letters such as http, ftp, or gopher that identify the resource type, followed by a colon and two forward slashes. Next, the
computer's name is listed. Finally, the directory and filename of the remote resource are listed.
UUCP (UNIX to UNIX Copy)
A protocol that passes e-mail and news through the Internet. Originally, UUCP allowed UNIX systems to send and receive files over phone lines.
WAIS (Wide Area Information Service)
A search engine and distributed information service that allows indexed searching and natural language input.
White Pages
Databases containing postal addresses, telephone numbers, and e-mail addresses of users on the Internet.
Winsocks
Acronym for Windows Sockets. A set of standards and specifications for programmers writing TCP/IP applications for Windows.
Web Browser
A web browser is a program that you use to view web pages. The two most popular web browsers are Microsoft Internet Explorer and Netscape Navigator.
Web Page
A web page is a rich document that can contain richly formatted text, graphics, animation, sound, and much more. Some web pages are generated dynamically
(such as the results of a search). You are currently viewing a (static) web page. Every web page on the Internet has a unique address which starts with the name
of the computer that holds that page. Within a web page, words and pictures can be linked to other pages. When you activate a link, you will be taken to another
page automatically. See also: Web, Web Browser, Understanding Internet Addresses.
Web server
A Web server is a server on the Internet that holds Web documents and makes them available for viewing by remote browsers.
ASCII
ASCII (pronounced as-key) is short for American Standard Code for Information Interchange. It is a standard code that assigns a binary number to all the
alphanumeric characters (upper and lower case), all the symbols on the keyboard, and some other symbols not on the keyboard (such as the cents symbol: ¢). All
computers have been using this standard code for more than a decade, and this is how plain text is saved on a disk. This standard does not define any formatting
however (except end of line), so word processors each have their own file type that includes formatting information as well.
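The mapping between characters and their ASCII codes can be seen directly with Python's built-in `ord()` and `chr()` functions:

```python
# Each character maps to a numeric code, and back again.
print(ord("A"))    # 65 - the ASCII code for upper-case A
print(ord("a"))    # 97 - lower-case letters sit 32 codes higher
print(chr(36))     # $ - codes map back to characters

# String comparisons follow the numeric codes, which is why
# upper-case letters sort before lower-case ones.
print("A" < "a")   # True
```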