Software Testing Fundamentals

V Model
The V model pairs each specification phase on the left arm with a test phase on the right arm, with coding at the base of the V; test planning for each right-arm phase begins while its left-arm counterpart is being produced:

- URS (User Requirement Specification) -> UAT planning -> User Acceptance Testing
- SRS (Software Requirement Specification) -> System test planning -> System Testing
- HLD (High Level Design) -> Integration test planning -> Integration Testing
- LLD (Low Level Design) -> Unit test planning -> Unit testing
- Coding sits at the bottom of the V

The left arm is checked through verification; the right arm validates the built product. Delivery (production deployment) follows user acceptance testing, after which the system enters maintenance and enhancement.

Software Testing Definitions
Software testing is:
- The process of executing a program or part of a program with the intent of finding errors (Myers)
- The process of trying to discover every conceivable fault or weakness in a work product (Myers)
- The process of searching for errors (Kaner)
- The process of evaluating or exercising a system or system component by manual or automated means to verify that the software meets specified requirements (IEEE)

Role of a Tester
- Assuring that the software meets the user's needs
- Assuring that the software can be used with negligible risk
This is achieved through verification and validation.

Verification
Verification is the process of determining whether or not the product of a given phase fulfils the specification from the previous phase. It uses reviews, inspections and demonstrations throughout development to ensure the quality of the product of each phase, including that it meets the requirements from the previous phase.
"Are we building the product right?"

Validation
Validation is the process of evaluating the software at the end of development to ensure compliance with the specified requirements. It includes what is commonly thought of as testing: executing the software and comparing test results to expected results. Validation occurs at the end of the development process.
"Are we building the right product?"

Static & Dynamic Testing
Most verification and validation activities can be classified as static or dynamic.
Static testing (without executing any program):
- Requirement reviews
- Design reviews
- Code reviews
Dynamic testing:
- Testing the software by executing the program

Characteristics of Static Testing
- Does not observe system behavior
- Does not look for system failures
- Faults are detected directly
- Focus is on evaluating adherence to standards, guidelines and processes

Characteristics of Dynamic Testing
- The program is executed
- System behavior is observed
- Determines the existence of failures
- Reveals the presence of faults

White Box Testing (Code based testing)
A software testing technique that uses explicit knowledge of the internal workings of the item being tested; white box testing uses specific knowledge of the programming code to examine outputs. Also known as glass box, structural, clear box and open box testing.

Advantages of white box testing
Helps to identify the following:
- Adherence to coding standards
- Adherence to coding guidelines
- Indentation
- Memory leaks
- Logical complexity of the program
- Limitations of the program

Black Box Testing (Requirement based testing)
A software testing technique whereby the expected outcome of the software is verified by providing inputs, without considering how the software arrives at those outputs. The internal workings of the item being tested are not known by the tester; the tester never examines the programming code and needs no knowledge of the program beyond its specifications.

Advantages of Black Box testing
- The test is unbiased because the designer and the tester are independent of each other
- The tester does not need knowledge of any specific programming language(s)
- The test is done from the point of view of the end user, not the designer or programmer
- Test cases can be designed as soon as the specifications are complete

Conclusions
White box testing does not guarantee 100% conformance to requirements. Black box testing does not concentrate on the logic of the program, but it ensures conformance to requirements. Hence, both white box and black box testing are required to ensure product quality. All types of testing, whether static or dynamic, white box or black box, are part of verification and validation activities. Let us see verification and validation activities.

Verification & Validation activities
Verification:
- Requirement reviews
- Design reviews
- Code reviews
Validation:
- Unit testing
- Module testing
- Integration testing
- System testing
- Regression testing
- User acceptance testing
- Field testing

Software Testing Life Cycle [STLC]

STLC Activities
- Test Requirements document
- Test Planning
- Test Design
- Test Execution
- Defect Tracking

Test Requirements Document
From the software requirement specification (SRS) document, the list of testable requirements is extracted and referred to as the Test Requirements document. All non-technical and un-testable requirements are excluded from this document. The Test Requirements document is the base for the further activities of testing.

Test Planning
Mainly, the Test Plan addresses:
- Scope and objectives of testing
- Schedule, resources and reporting
- Types of testing and methodology
- Phases of testing applicable and scope of testing in each phase
- Software and hardware requirements
- Identified risks and the strategy for mitigating those risks
- Information regarding tools used through the entire testing life cycle

Test Design
Test Design is applicable to both white box and black box testing. The test design activity involves designing test cases for a given requirement (black box testing) or for a given program (white box testing).
A test case is defined as "a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement" (IEEE). A simple record capturing these parts is sketched below.
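As an illustration of the IEEE definition, a test case can be captured as a small record holding its objective, inputs, execution conditions and expected result. A minimal Python sketch; the field names and the login example are illustrative, not taken from any standard template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case per the IEEE definition: inputs, execution
    conditions, and expected results tied to a single objective."""
    test_id: str
    objective: str                 # requirement or program path being exercised
    preconditions: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Not Run"        # Pass / Fail / Unable to test / Deferred

# Example: a black box test case derived from a hypothetical login requirement
tc = TestCase(
    test_id="TC-LOGIN-001",
    objective="Verify login rejects an invalid password (REQ-AUTH-02, hypothetical)",
    preconditions=["User 'alice' exists and is active"],
    inputs={"username": "alice", "password": "wrong-password"},
    expected_result="Error message shown; user remains on the login page",
)
print(tc.status)   # Not Run, until the case is executed
```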

Test Execution
Test execution involves:
- Executing the developed test cases on a piece of program (code based test cases) or on the entire software application (requirements based test cases)
- Updating the status of each test case during execution; possible states include Pass, Fail, Unable to test and Deferred
- Collecting and analyzing test execution statistics for test progress monitoring

Defect Tracking
When the actual result obtained from the software application during testing deviates from the expected result written in the test case, it is termed a "defect". The test case is failed and a defect is posted against the software. The defect is fixed by the development team and the fix is provided in a subsequent release. The fix is then validated and, if found to be working, the test case passes and the defect is closed. Posting, tracking and closing defects is done in a defect tracking tool.

SDLC Vs STLC
- Requirements Phase -> Test Requirements document, Test Planning
- Design Phase -> Test Case Design
- Coding Phase -> Unit Test Execution
- Deployment Phase -> System Test Execution, Defect Tracking

Requirement Reviews

Requirement reviews
Requirement quality affects work performed in subsequent phases of the system life cycle. Requirements of poor quality:
- Increase cost and schedule: effort is spent during design and implementation trying to figure out what the requirements are
- Decrease product quality: poor requirements cause the wrong product to be delivered, or de-scoping to meet schedule or cost constraints

Requirement reviews contd.
- Increase maintenance effort: lack of traceability increases the effort to identify where changes are required, especially as knowledgeable personnel leave
- Create disputes with the customer/client: ambiguity causes differences in expectations and contractual issues
- Are a major cause of project failure: all of the above

Requirement Quality factors
Cohesive, Complete, Consistent, Feasible, Independent, Necessary, Unambiguous, Mandatory, Usable, Terse, Testable, Traceable, Non redundant, External observability, Metadata, Verifiable and validatable

Requirement characteristic: Cohesive
- Does each requirement specify only one thing?
- Do all parts of the requirement belong together?
  - Do all parts of a data requirement involve the same data abstraction?
  - Do all parts of a functional requirement involve the same functional abstraction?
  - Do all parts of an interface requirement involve the same interface?
  - Do all parts of a quality requirement involve the same quality factor or sub-factor?

Requirement characteristic: Complete
- Is each requirement self-contained, with no missing information?
- Does each requirement contain all relevant information? For example, does the requirement include all relevant preconditions, such as the relevant state of the application or component?
- Does each requirement need no further amplification or clarification?
- Does each requirement provide sufficient information to avoid ambiguity?

Requirement characteristic: Complete contd.
- If the requirement is not a part of the current release, is it specified as completely and as thoroughly as is currently known?
- Is each identified "requirement" actually a single requirement and not actually multiple requirements?
- Is the use of conjunctions ("and" and "or") restricted to preconditions and invariants?

Requirement characteristic: Consistent
- Is each requirement externally consistent with its documented sources, such as higher-level goals and requirements?
- Is each requirement externally consistent with all other related requirements of the same type or in the same requirements specification? For example, two requirements should neither be contradictory nor describe the same concepts using different words.
- Are the constituent parts of each requirement internally consistent? For example, are all parts of a compound precondition or post-condition consistent?

Requirement characteristic: Feasible
- Can each requirement be implemented given the existing hardware or software technology?
- Can each requirement be implemented given the endeavor's budget?
- Can each requirement be implemented given the endeavor's schedule?
- Can each requirement be implemented given the endeavor's constraints on staffing (e.g., staff size, expertise, and experience)?
- Can each requirement be implemented given the limitations of physics, chemistry, etc.?

Requirement characteristic: Independent
- The requirement does not rely on another requirement to be fully understood.
- Parent requirements rely on their children to be fully defined; in testing, a parent is not satisfied until all its children are met.
- Requirements that need proxies are not independent. Why retain them? These may be source requirements that must be retained.

Requirement characteristic: Independent contd.
- Also, using such requirements to structure their proxies or children improves understandability.
- Example: "user friendly" can be used to assign, talk about, or locate the group of proxies defining "user friendly" for that particular project.

Requirement characteristic: Mandatory
- Is each requirement essential to the success of the application or component?
- Is each requirement truly mandatory (i.e., a true requirement that must be met and implemented)?
- Is each requirement truly required by some stakeholder, typically the customer or user organization?
- Is each requirement free from unnecessary constraints (e.g., architecture, design, implementation, testing, and other technology decisions)?

Requirement characteristic: Mandatory contd.
- Does each requirement specify a "what" rather than a "how"?
- Is each requirement clearly differentiated from:
  - a "nice to have" item on someone's wish list (i.e., goldplating)?
  - constraints?

Requirement characteristic: Metadata
Individual requirements should have metadata (i.e., attributes or annotations) that characterizes them. This metadata can include (but is not limited to): acceptance criteria, identification, prioritization, rationale, assumptions, allocation, status, schedule, and tracing information.

Requirement characteristic: Verifiability
- Can each requirement be verified against its source?
- Can each requirement be verified against its associated standards (e.g., content and format), guidelines, and/or templates?

Requirement characteristic: Validatability
- Is it possible to ensure that each requirement is actually what the customer representatives really want and need?
- Is it possible to ensure that each requirement is actually what the user representatives really want and need?
- Is it possible to ensure that each requirement is actually what the marketing representatives really want and need?

Requirement characteristic: External Observability
- Does each requirement only specify behavior and/or characteristics that are externally observable when treating the application or component as a black box?
- Does each requirement avoid specifying any internal architecture, design, implementation, or testing decisions?
- If a requirement does specify one or more internal architecture, design, implementation, or testing decisions, is it clearly identified as a constraint rather than as a pure requirement?

Requirement characteristic: Testable
It must be possible to prove that the object of the requirement satisfies the requirement. Un-testable requirements can lead to disputes with the client.
Examples of un-testable requirements:
- "The system shall produce the ABC report in a timely manner"
- "The system shall be written in the approved language"

Requirement characteristic: Traceable
Examine the statement "The system shall calculate retirement annuities and survivor benefits".
Observations:
- Two different requirements are clubbed together
- Distinctness cannot be maintained while reporting
- It can be decomposed as under:
  The system shall calculate
  A. Retirement annuities
  B. Survivor benefits

Requirement attributes
- Unique identifier
- Organizational information (for example, what are the parents/children of the requirement, its category or type)
- Method of validation
- Item(s) that satisfy the requirement
- Source of requirement (legal citation, business policy, etc.)
- Association with the test plan/test(s)
- Requirement owners (subject matter expert, analyst)
- Requirement status

Requirement attributes contd.
- Requirement change history
- WBS code
- Risk
- Priority
- Cost (estimate and actual)
- Degree of difficulty
- Metrics
- Justification for the requirement
- Cross references to other requirements or documents
- Comments

Case Study I: Requirements review
Review the software requirement specification (SRS) document for the marketing division of ABC Pharmaceuticals and provide review comments in the enclosed template. Categorize each review comment by appropriate severity and category. At the end, provide statistics of the review comments in terms of severity and category.

Design Review

Design reviews
Reviews for software design focus on data design, architectural design and procedural design. In general, there are two types of design reviews:
- Preliminary design review
- Design walkthrough

Preliminary design review and design walkthrough
Preliminary design review:
- Assesses the translation of requirements to the design of data and architecture
Design walkthrough:
- Concentrates on the procedural correctness of algorithms as they are implemented within program modules

Design review verifications
- Do designs satisfy all specified requirements for the product?
- Have all relevant standards and guidelines been applied or met?
- Are product design and processing capabilities compatible?
- Are safety requirements met?

Design review verifications contd.
- Do designs meet functional and operational requirements, for example performance and reliability requirements?
- Is the design satisfactory for all the anticipated environmental and load conditions?
- Are components or service elements standardized, and do they provide reliability, availability and maintainability?

Design review verifications contd.
- Are the plans for implementing the design technically feasible (in terms of purchasing, production, installation, inspection and testing)?
- Are the assumptions made during the design process valid?

Case Study II: Design review
Review the design specification document against the requirements provided in the SRS for the marketing division of ABC Pharmaceuticals and provide review comments in the enclosed template. Categorize each review comment by appropriate severity and category. At the end, provide statistics of the review comments in terms of severity and category.

Code Reviews

Introduction: Code review
Code review is a phase in the computer program development process. It is an activity in which authors of code, peer reviewers, and perhaps quality assurance reviewers get together to review code. The code is read line by line for:
- real or potential flaws
- consistency with the overall program design
- comment quality
- adherence to coding standards

Advantages: Code review
- Finding and correcting errors at this stage is relatively inexpensive
- Code reviews tend to reduce the more expensive process of handling, locating, and fixing bugs during later stages of development or after code delivery to users

Code review smoke test
The code review smoke test includes:
- Does the code build correctly?
- Does the code execute as expected?
- Has the developer tested the code for positive workflows?
- As a reviewer, do you understand the code?

Comments and coding conventions
- Does the code respect project-specific coding conventions?
- Does the source file start with an appropriate header and copyright information?
- Are variable declarations properly commented?
- Are units of numeric data properly commented and clearly stated?
- Are all functions, methods and classes documented?
- Are complex algorithms and code optimizations adequately commented?
- Does code that has been commented out have an explanation?
- Are comments used to identify missing functionality or unresolved issues in the code?

Error handling
- Are assertions used everywhere data is expected to have a valid value or range?
- Are errors properly handled each time a function returns?
- Are resources and memory released in all error paths?
- Are all thrown exceptions handled properly?
- Is the function caller notified when an error is detected?
- Has error handling code been tested?

Resource Leaks
- Is allocated memory (non-garbage collected) freed?
- Are all objects (database connections, sockets, files, etc.) freed even when an error occurs?
- Is the same object released more than once?
- Does the code accurately keep track of reference counting?
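To make the "freed even when an error occurs" check concrete, here is a small Python sketch a reviewer might use to contrast a leaky pattern with an acceptable one; the file-summing function is purely illustrative:

```python
def leaky_sum(path):
    # A reviewer would flag this: if a line is malformed, int() raises
    # and the file handle is never closed on that error path.
    f = open(path)
    total = sum(int(line) for line in f)
    f.close()
    return total

def safe_sum(path):
    # The context manager guarantees the handle is released on every path,
    # including the error path raised by a malformed line.
    with open(path) as f:
        return sum(int(line) for line in f)
```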

Thread safeness
- Are all global variables thread-safe?
- Are objects accessed by multiple threads thread-safe?
- Are locks released in the same order they are obtained?
- Is there any possible deadlock or lock contention?

Control Structures
- Are loop ending conditions accurate?
- Is the code free of unintended infinite loops?

Performance
- Do recursive functions run within a reasonable amount of stack space?
- Are whole objects duplicated when only references are needed?
- Does the code have an impact on size, speed, or memory use?
- Are blocking system calls used where performance matters?
- Is the code doing busy waits instead of using synchronization mechanisms or timer events?

Functions
- Are function parameters explicitly verified in the code?
- Are arrays explicitly checked for out-of-bound indexes?
- Are functions returning references to objects declared on the stack?
- Are variables initialized before they are used?
- Does the code re-write functionality that could be achieved by using an existing API?

Bug fixes
- Does a fix made to a function change the behavior of caller functions?
- Does the bug fix correct all the occurrences of the bug?

Case Study III: Code review
Review the code written in C++ for the marketing division of ABC Pharmaceuticals and provide review comments in the enclosed template. Categorize each review comment by appropriate severity and category; categories can include comments and coding conventions, error handling, resource leaks, control structures, functions, bug fixes, deviation from requirements and deviation from design. At the end, provide statistics of the review comments in terms of severity and category.

White Box Testing

White Box Testing (Code based testing)
A software testing technique that uses explicit knowledge of the internal workings of the item being tested; white box testing uses specific knowledge of the programming code to examine outputs.
- Examines the internal design of the program
- Requires detailed knowledge about the structure of the program
- Allows exhaustive testing of all the logical paths (i.e. each line of code for each condition)
Also known as glass box, structural, clear box and open box testing.

Advantages of white box testing
Helps to identify the following:
- Adherence to coding standards
- Adherence to coding guidelines
- Indentation
- Memory leaks
- Buffer overflows and stack issues
- Logical complexity of the program
- Limitations of the program

Statement Coverage
- Each statement in the program is executed at least once
- 100% of the statements in the program should be executed at least once
Weakness: it is necessary but not sufficient. When there is a decision, you have to ensure that it takes the correct path, and that is not checked by statement coverage alone, as the sketch below illustrates.
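A minimal sketch, using a made-up discount function, of why statement coverage is necessary but not sufficient: a single test can execute every statement yet never take the false outcome of the decision.

```python
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9     # the only statement controlled by the decision
    return price

# This one test executes 100% of the statements...
assert apply_discount(100, True) == 90.0

# ...but the decision's false outcome was never exercised; a second test is
# still needed (branch coverage) to catch, e.g., an inverted condition.
assert apply_discount(100, False) == 100
```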

Branch/Decision Coverage
Statement coverage does not address all outcomes of decisions. Branches like If..Else and Do..While are to be evaluated for both true and false: each branch direction must be traversed at least once.
Ex: For the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are:
- A=6 and B=4 -> True (here A is true and B is false)
- A=2 and B=3 -> False (here A is false and B is false)
That is, check how many decisions there are and, for each decision, write one test case for true and one test case for false; the sketch below turns these two cases into executable checks.
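The two test cases above can be written directly as executable checks. This is a sketch assuming a helper `set_x` that implements IF (A>=5) OR (B<2) THEN X=1 (with X=0 otherwise):

```python
def set_x(a, b):
    # IF (A >= 5) OR (B < 2) THEN X = 1 ELSE X = 0
    return 1 if (a >= 5) or (b < 2) else 0

# Decision/branch coverage: the whole condition evaluated once true, once false.
assert set_x(6, 4) == 1   # A=6, B=4 -> decision is True
assert set_x(2, 3) == 0   # A=2, B=3 -> decision is False
```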

Conditions Coverage
The true and false outcomes of each condition in a decision must be tested: all the conditions should be executed at least once for both true and false values. Do not look for combinations.
Example: For the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are:
- A=6 and B=3 -> True (here A is true and B is false)
- A=2 and B=1 -> True (here A is false and B is true)

Condition/Decision coverage
Condition coverage may not always result in decision coverage. In such cases, go in for decision + condition coverage.
Multiple Condition Coverage: go for combinations of the individual conditions. For example, for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are:
- A=6, B=1 (both conditions true)
- A=6, B=6 (only A>=5 true)
- A=2, B=1 (only B<2 true)
- A=2, B=3 (both conditions false)
These combinations are written out as checks in the sketch below.
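A sketch that enumerates the four true/false combinations of the two individual conditions, reusing the same illustrative `set_x` helper:

```python
def set_x(a, b):
    # IF (A >= 5) OR (B < 2) THEN X = 1 ELSE X = 0
    return 1 if (a >= 5) or (b < 2) else 0

# Multiple condition coverage: every combination of (A >= 5) and (B < 2).
cases = {
    (True,  True):  (6, 1),   # A>=5 true,  B<2 true
    (True,  False): (6, 6),   # A>=5 true,  B<2 false
    (False, True):  (2, 1),   # A>=5 false, B<2 true
    (False, False): (2, 3),   # A>=5 false, B<2 false
}
for (c1, c2), (a, b) in cases.items():
    expected = 1 if (c1 or c2) else 0
    assert set_x(a, b) == expected, (a, b)
```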

Path Coverage
Errors are sometimes revealed only on a path involving a particular combination of branches. More general coverage requires executing all possible paths, known as the path coverage criterion. The number of paths may be infinite if there are loops, so 100% path coverage is impossible in general.

White box testing steps
- Examine the program logic
- Design test cases to satisfy the logic coverage criteria
- Run the test cases
- Compare the actual results obtained with the expected results in the test case
- Report errors in case of deviation from expected results
- Compare actual coverage to expected coverage

Cyclomatic Complexity
Cyclomatic complexity provides a quantitative measure of the logical complexity of the program. It gives the minimum number of independent paths in the given program. Based on the cyclomatic complexity value obtained, the decision whether or not to accept the program for testing can be made. A worked example follows.
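A worked sketch of the common counting shortcut (cyclomatic complexity = number of binary decisions + 1) on a small, purely illustrative function:

```python
def classify(score):
    if score < 0:            # decision 1
        raise ValueError("negative score")
    if score >= 90:          # decision 2
        return "gold"
    elif score >= 50:        # decision 3
        return "silver"
    return "basic"

# V(G) = decisions + 1 = 3 + 1 = 4, so there are at least four independent
# paths, and at least four test cases are needed to cover them:
#   score = -1, score = 95, score = 60, score = 10
```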

Black Box Testing (Requirement Based Testing)

Software Testing Phases
- Unit Testing
- Module Testing
- Integration Testing
- System Testing
- User Acceptance Testing
- Field Testing

Test Case Design Techniques

Client Server Application Testing

Web Based Application Testing

Introduction to web applications
- Web Technology
- Web Architecture
- HTML/DHTML
- Web servers
- Cookies
- Types of testing applicable to web applications

Applicable types of testing
- Unit testing
- Page flow testing
- Usability testing
- Functional testing
- Load testing
- Performance testing
- Data volume testing
- Security testing
- Regression testing
- External testing
- Connectivity testing
- Stress testing

Unit Testing
Unit testing involves testing the individual modules and pages that make up the application. In general, unit tests check the behavior of a given page, i.e. does the application behave correctly and consistently given either good or bad input. Some of the types of checking would include:
- Invalid input (missing input, out-of-bound input, control characters in strings, etc.)
- Alternate input format (e.g. entering an integer when a float is expected and vice versa, 0 instead of 0.0, 0.00000001 instead of 0, etc.)
A sketch of such checks appears below.
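A minimal sketch of such input checks using the standard unittest module; `parse_quantity` is an illustrative stand-in for a page-level input handler, not part of any real application:

```python
import unittest

def parse_quantity(raw):
    """Illustrative handler: accepts a positive integer quantity up to 1000."""
    if raw is None or str(raw).strip() == "":
        raise ValueError("missing input")
    value = float(raw)                 # non-numeric text raises ValueError here
    if value != int(value):
        raise ValueError("integer expected, got a float")
    if not (1 <= value <= 1000):
        raise ValueError("out of bounds")
    return int(value)

class QuantityInputTests(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_quantity("25"), 25)

    def test_invalid_and_alternate_format_input(self):
        for bad in [None, "", "abc", "2.5", "0", "10000", "0.00000001"]:
            with self.assertRaises(ValueError, msg=bad):
                parse_quantity(bad)

if __name__ == "__main__":
    unittest.main()
```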

Unit Testing contd.
- Button click testing, e.g. multiple clicking with and without pauses between clicks
- Immediate reload after a button click, prior to a response having been received
- Multiple reloads in the same manner as above
- Random input and random click testing: this involves a user randomly pressing buttons (including multiple clicks on "hrefs") and randomly picking checkboxes and selecting them

Unit Testing contd.
There are two forms of output screen expected:
- An error page indicating the type of error encountered
- A normal page showing either the results of the operation or the normal next page where more options may be selected
In no event should a catastrophic error occur.

Page Flow Testing
Page flow testing deals with ensuring that jumping to random pages does not confuse the application. Each page should typically check that it can only be viewed via a specific set of previous pages, and if the referring page is not one of that set, an error page should be displayed (see the sketch below). A page flow diagram is a very useful aid for the tester when checking for correct page flow within the application.
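A sketch of the per-page check described above, assuming a hypothetical server-side guard that knows which pages may legitimately precede each restricted page:

```python
# Hypothetical flow rules: each restricted page lists the pages allowed to precede it.
ALLOWED_PREVIOUS = {
    "/checkout/payment": {"/checkout/address"},
    "/checkout/confirm": {"/checkout/payment"},
}

def check_page_flow(requested_page, referring_page):
    allowed = ALLOWED_PREVIOUS.get(requested_page)
    if allowed is None:
        return "OK"                 # page has no flow restriction
    if referring_page in allowed:
        return "OK"
    return "ERROR_PAGE"             # jumped in from an unexpected page

# A page flow test drives the pages out of order and expects the error page.
assert check_page_flow("/checkout/confirm", "/checkout/address") == "ERROR_PAGE"
assert check_page_flow("/checkout/confirm", "/checkout/payment") == "OK"
```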

Impact of page flow on security
Some aspects of page flow testing cross into security. Some simple checks to consider:
- Force the application to move in an unnatural path
- The application must resist this and display an appropriate error message

Page flow testing: Details
- Log into the system and then attempt to jump to any page in any order once a session has been established
- Use bookmarks and set up temporary web pages to redirect into the middle of an application using faked session information

response time. graphics.Usability testing Usability testing ensures that all pages present a cohesive look to the user. page size. dithering. including spelling. etc Examples of usability testing include:  Spelling checks  Graphical user interface checks (colors.)  Adherence to web GUI Standards  Meaningful error messages  Accuracy of data displayed .. aliasing. size. etc.

Usability testing contd.
- Page navigation
- Context sensitivity
- Editorial continuity
- Accessibility
- Accuracy of data in the database as a result of user input
- Accuracy of data in the database as a result of external factors (e.g. imported data)
- Meaningful help pages, including context sensitive help

Functional Testing
Functional testing ensures:
- Conformance to the functional requirements of the application
- Scenarios/test cases are designed to find out conformance to the requirements
- The whole business logic gets tested as part of functional testing

Load Testing
Load testing the application involves generating varying loads (in terms of concurrent users) against:
- the web server,
- the databases supporting the web server, and
- the middleware/application server logic connecting those pages to the databases
Load testing includes verification of data integrity on the web pages and within the back end database, and also covers load ramping and surges in activity against the application.

Load Testing contd.
"Does the site scale?" "Is the site's response time deterministic?" Examples of load testing would include:
- Sustained low load test (50 users for around 48 hours)
- Sustained high load test (300+ users for 12 hours)
- Surge test (e.g. run 50 users, then surge to 500 users and then return to 50); this test should run for 48 hours, and the system should continue running with multiple surges at various times during the day
Throughout, no memory leaks, orphaned processes, lost users, etc. should be seen. A rough sketch of generating such loads follows.
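A rough sketch of generating a concurrent load and a surge using only the standard library; the target URL is a placeholder, and a real load test would normally use a dedicated tool:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/app/login"   # placeholder endpoint

def one_request(_):
    start = time.time()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            ok = (resp.status == 200)
    except Exception:
        ok = False
    return ok, time.time() - start

def run_load(concurrent_users, requests_per_user):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(one_request,
                                range(concurrent_users * requests_per_user)))
    failures = sum(1 for ok, _ in results if not ok)
    avg = sum(t for _, t in results) / len(results)
    print(f"{concurrent_users} users: {failures} failures, avg {avg:.2f}s/request")

# Sustained low load, a surge, then back down (durations shortened for the sketch).
for users in (50, 500, 50):
    run_load(users, requests_per_user=2)
```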

Load Testing contd.
Load testing is also used to discover at what load the application would fail and where the saturation points are.

Performance Testing
Performance testing refers to the response time taken by the software to process and present the requests made by the end users. Performance depends on:
- Speed of the network
- Hardware configuration of the application server, web server, database server and the client system (processor, RAM, etc.)
- Volume of data in the database

Data Volume Testing
Data volume testing involves testing the application under data load, where large quantities of data are passed through the system (e.g. large numbers of items in dropdown/combo boxes, large amounts of data in text boxes). Performance of the application should be monitored during this testing, since a slow database could significantly affect response time, and data must be collected throughout.

Data Volume Testing contd.
This data can be used as a control set for contrasting monitoring data from a live system and providing predictive information indicating when major application stress points may be encountered. No errors should be seen on application pages or in error logs for pages that are data intensive.

Security Testing
Security testing involves verifying whether both the servers and the application are managing security correctly.
Security from the server perspective:
- Attempt to penetrate system security both internally and externally to ensure the system that houses the application is secure from both internal and external attacks
- Attempt to cause things like buffer overflows to result in root access being given accidentally (such code does exist, but explaining it is beyond the scope of this document)

Security Testing contd.
- Attempt to cause the application to crash by giving it false or random information
- Ensure that the server OS is up to the correct patch levels from a security viewpoint
- Ensure that the server is physically secure

Security Testing contd.
Application level security testing involves testing some or all of the following:
- Unauthenticated access to the application
- Unauthorized access to the application
- Unencrypted data passing
- Protection of the data
- Log files should be checked to ensure they do not contain sensitive information

Security Testing contd.
- Session information must be valid and secure (e.g. a URL containing a session identifier cannot be copied from one system to another and the application then continued from the different system without being detected)
- Faked sessions
- Multiple login testing by a single user from several clients

Security Testing contd.
- Attempt to break into the application by running username/password checks using a password-cracking program
- Security audit: examine log files, etc.; no sensitive information should be left in raw text/human readable form in any log file
- Automatic logout after N minutes of inactivity, with positive feedback to the user

Regression Testing
Regression testing ensures that, during the lifetime of the application, any fixes do not break other parts of the application. This type of testing typically involves running all the tests, or a relevant subset of those tests, when defect fixes are made or new functionality is added. The regression tests must also be kept up to date with planned changes in the application: as the application evolves, so must the tests.

External Testing
External testing deals with checking the effect of external factors on the application. Examples of external factors would be the web server, the database server, the browser, network connectivity issues, etc. Examples of external testing are:
- Database unavailability test (e.g. is login or further access to the application permitted should the database go into a scheduled maintenance window?)
- Database error detection and recovery test (e.g. simulate loss of database connectivity; the application should detect this and report an error accordingly, and should be able to recover without human intervention when the database returns, as sketched below)
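One way an application can meet the "recover without human intervention" expectation is a bounded retry around its database calls. A sketch; `DatabaseUnavailable` is a made-up exception type standing in for whatever error the real database driver raises:

```python
import time

class DatabaseUnavailable(Exception):
    """Illustrative stand-in for the driver's connection error."""

def with_db_retry(operation, retries=5, delay_seconds=2.0):
    """Run a database operation, retrying while the database is unavailable."""
    for attempt in range(1, retries + 1):
        try:
            return operation()
        except DatabaseUnavailable:
            if attempt == retries:
                raise                  # give up; the caller shows a clear error page
            time.sleep(delay_seconds)  # wait for the database to come back
```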

etc. Browser compatibility tests ± for example..External Testing     Database authentication test (check access privileges to the database). does the JavaScript work the same way. Connection pooling test (ensure that database connections are used sparingly. and will not run out under load). . does the application behave the same way on multiple browsers. Web page authentication test.

Connectivity Testing
Connectivity testing involves determining whether the servers and clients behave appropriately under varying circumstances. This testing is difficult to accomplish from a server perspective, since it is expected that the servers will be operating with standby power supplies as well as being in a highly available configuration. Thus the server tests need not be run using a power-off scenario; simply removing the network connection to the PC may be sufficient.

Connectivity Testing contd.
Two aspects of connectivity testing:
- Voluntary, where a user actively interacts with the system in an unexpected way
- Involuntary, where the system acts in an unpredictable manner

Connectivity Testing: Involuntary
Test:
- Force the browser to prematurely terminate during a page load, using a task manager to kill the browser, or by hitting the ESC key and reloading or revisiting the same page via a bookmark
Expectation:
- The testing should cover both a small delay (< 10 secs) in reinstating the browser as well as a long delay (> 10 mins); in the latter case the user should not be able to connect back to the application without being redirected to the login page

Connectivity Testing: Involuntary
Test:
- Simulation of hub failure between the PC and the web server: removing the network cable from the PC, attempting to visit a page, aborting the visit, and then reconnecting the cable can simulate this
- The test should use two time delays before reconnecting: the first should be under 15 seconds, and the second around 15 minutes
- After reconnecting, attempt to reload the previous page
Expectation:
- The user should be able to continue with the session unless a specified timeout has occurred, in which case the user should be redirected to a login page

Connectivity Testing: Involuntary
Test: Web server on/off test
- Shut down the web server, then restart the server
Expectation:
- The user should be able to connect back to the application without being redirected to the login page; this will prove the statelessness of individual pages
Note:
- The shutdown is only for the web server; do not attempt this with an application server, as that is a separate test

Connectivity Testing: Involuntary
Test: Database server on/off test
- Shut down the database server and restart it
Expectation:
- The user should be able to connect back to the application without being redirected to the login page; it may be that a single transaction needs to be redone, and the application should detect this and react accordingly

Connectivity Testing: Involuntary
Application server on/off test:
- Shut down the application server and restart it
- There are two possible outcomes, depending on how session management is implemented:
  - The application redirects to an error page indicating loss of connectivity, and the user is requested to log in and retry
  - The application continues normally, since no session information was lost because it was held in a persistent state that transcends application server restarts

Connectivity Testing: Voluntary
Examples of voluntary connectivity testing include:
- Quit from session with the user saving state
- Quit from session without the user saving state
- Client: forced quit from session due to visiting another site/application for an extended period of time
- Client: forced quit from session due to visiting another site in the middle of a session for a brief period of time
- Client: forced quit due to the browser crashing
- Server: forced quit from session due to inactivity
- Server: forced quit from session due to a server problem

Extended Session Testing
- Remaining in a session for an extended period of time and clicking items to navigate the screen: the session must not be terminated by the server except in the case of a deliberate logout initiated by the user.
- Remaining on a single page for an extended length of time: the session should be automatically terminated, and the next click by the user should take the user to a page indicating why the session was terminated, with the option to log back into the system. The page may have a timed redirect associated with it, and if so, a page indicating a timed-out session should be displayed.

Extended Session Testing contd.
The following must be tested:
- The user's session should have been saved and may optionally be restored on re-login
- The user's state must reflect the last complete action the user performed
- Leaving the application pages to visit another site or application and then returning to the original application via a bookmark or the back button should result in a restoration of state, and the application should continue as if the person had not left

Power Hit/Reboot/Other Cycle Testing
Power hit/cycle testing involves determining whether the servers and clients act appropriately during the recovery process:
- Client power off/on test
- Client hub power off/on test
- Client network connection removal/reinsertion test
- Server power off/on test
- Server hub power off/on test
- Server network connection removal/reinsertion test

Standards Conformance Testing
Conformance to:
- Web application standards
- Web user interface standards and guidelines
- Web usability standards
- Web security standards
- Domain specific standards (e.g. CCOW and HL7 for healthcare, SOX for banking software, etc.)

Bug Life Cycle

Bug Life Cycle
States and transitions (from the life cycle diagram):
- Submitted: the tester posts the bug
- In work: the developer is solving the bug
- Solved: the bug is solved (only by the developer)
- Validated: the fix is tested by the tester and the bug is closed here
- Re-work: if the bug is not solved, it goes back to the developer
- Deferred: unable to fix in the current release
- Terminated: invalid bug
- Reviewed: the terminated bug is reviewed and closed by management

Bug life cycle [Notes]
The status "Submitted" or "Posted" is assigned to the defect when the tester raises it. The status of the bug is moved to "In work" by the developer once the developer starts working on fixing the defect. In case the submitted bug is found to be invalid, the bug is moved to the "Terminated" or "Rejected" state by the development team. Once the developer fixes the bug, the developer moves the status of the defect to "Solved", and the fix shall be made available to the tester in the next release.

Bug life cycle [Notes] contd.
The tester tests the fix for the bug and, if it is found to be working fine, moves the status of the defect to "Validated"; otherwise the defect is put back to the developer and the status of the bug is moved back to "In work". In case the development team is not in a position to fix the defect in the current release, the development team moves the status of the defect to "Deferred", meaning it shall be taken up for fixing in the next release.
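The transitions described in these notes can be collected into a small state table, which also serves as a handy oracle when testing a defect tracking tool. The state names follow the slides; the exact transition set is a sketch:

```python
# Allowed bug state transitions (sketch based on the notes above).
TRANSITIONS = {
    "Submitted":  {"In work", "Terminated", "Deferred"},
    "In work":    {"Solved", "Deferred"},
    "Solved":     {"Validated", "In work"},   # fix accepted, or put back for re-work
    "Validated":  {"Closed"},
    "Deferred":   {"In work"},                # taken up again in a later release
    "Terminated": {"Reviewed"},               # invalid bug, reviewed/closed by management
    "Reviewed":   set(),
    "Closed":     set(),
}

def can_move(current, target):
    return target in TRANSITIONS.get(current, set())

assert can_move("Submitted", "In work")
assert not can_move("Submitted", "Closed")    # must be solved and validated first
```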

Reporting Defects

Reporting defects: Attributes
- Product name/Application name
- Version
- Module
- Summary
- Steps to reproduce
- Impact
- Database information
- Severity
- Priority
- Browser (IE, NN, Mozilla)
- Screen shots (if required and available)
- Reproducible (Yes, No, Sporadic)
- Type of bug (Performance, Functionality, User interface, etc.)
- Phase of testing (Unit, Integration, System testing)
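The attribute list above maps naturally onto a simple defect record; a sketch with purely illustrative field values:

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    product: str
    version: str
    module: str
    summary: str
    steps_to_reproduce: str
    impact: str
    severity: str          # Critical / Serious / Minor
    priority: str          # High / Medium / Low
    browser: str
    reproducible: str      # Yes / No / Sporadic
    bug_type: str          # Functionality, Performance, User interface, ...
    phase_found: str       # Unit, Integration, System testing, ...

bug = DefectReport(
    product="ABC Pharma marketing portal",       # illustrative values only
    version="1.2",
    module="Order entry",
    summary="Order total is wrong when a discount coupon is applied twice",
    steps_to_reproduce="1. Log in  2. Add an item  3. Apply coupon twice  4. View total",
    impact="Customers are undercharged; finance reports are inaccurate",
    severity="Serious",
    priority="High",
    browser="IE",
    reproducible="Yes",
    bug_type="Functionality",
    phase_found="System testing",
)
```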

Details of the attributes
Product name/Application:
- Provide the name of the application being tested or select it from a list
Version:
- Provide the version of the application being tested or select it from a list, e.g. version 1.0, 1.2, etc.
Module:
- Provide the module of the application in which the bug occurred or select it from a list

Details of the attributes contd.
Summary:
- Provide a summary of the defect such that this summary, when viewed, gives a sufficient picture of which team and category the defect belongs to
- Project leads/managers assign the defect to different individuals based on the details of the summary
Steps to reproduce (Description):
- Provide a step by step explanation of how you arrived at the defect; the development team must be able to reproduce the defect with these details

Details of the attributes contd.
Impact:
- Provide the impact of the defect being posted, from the application and end user's perspective
Database information:
- Provide information on the database as to whether it is a new database or a ported database
- If ported, state from which previous release it was ported

Details of the attributes contd.
Severity:
- Critical (the defect has a severe impact on the end user's workflow)
- Serious (the defect has blocked workflow(s), but alternatives are available)
- Minor (does not block any user's workflow; trivial error)
Priority:
- High (needs immediate fixing)
- Medium (can be fixed within an agreed time period)
- Low (can be fixed at convenience)

Details of the attributes contd.
Phase of testing:
- Provide or select a phase of testing, such as Unit testing, Module testing, Integration testing or System testing
- This helps to analyze how many bugs were uncovered during a particular phase of testing and facilitates comparing the defects found across phases

Details of the attributes contd.
Reproducible:
- This attribute generally has 3 options, i.e. Yes, No and Sporadic
- Selecting Yes indicates that the defect is reproducible by following the steps specified as part of the defect
- Selecting No indicates that the defect is not reproducible in a particular given sequence
- Selecting Sporadic indicates that the defect is reproducible by following the steps specified, but it does not appear consistently

Details of the attributes contd.
Type of bug:
- Provide or select the type of bug, i.e. whether the defect found falls into the category of Functionality, Usability, Performance, Load, Stress, Volume, Security, User interface, etc.
- These statistics help to understand how many functional, performance, etc. defects appeared in the release and give direction for identifying bottlenecks

Details of the attributes contd.
Browser:
- Provide or select the browser on which the software was being used when the defect occurred, e.g. Internet Explorer, Netscape Navigator, Mozilla, etc.
Screenshots:
- Attach screenshots of error messages or system crashes while posting the defect; this helps the development team to understand the defect better

Case Study
Study the following defects observed while testing a software product and re-write them in proper format and assign appropriate severity and priority to the defects.

Thank You
