
Integration testing: As I wrote in my previous posts, integration testing is designed to test the structure and architecture of the software and determine whether all software components interface properly. Integration testing does not verify that the system is functionally correct, only that it performs as designed.

It is the process of identifying errors introduced by combining individually unit-tested program modules. Integration Testing should not begin until all units are known to perform according to the unit specifications. It can start with testing several logical units or can incorporate all units in a single integration test.

Below are the four steps of creating integration test cases:

Step 1 - Identify Unit Interfaces: The developer of each program unit identifies and documents the unit's interfaces for the following unit operations:

- Responding to queries from terminals for information
- Managing transaction data entered for processing
- Obtaining, updating, or creating transactions on computer files
- Passing or receiving information from other logical processing units
- Sending messages to terminals
- Providing the results of processing to some output device or unit

Step 2 - Reconcile Interfaces for Completeness: The information needed for the integration test template is collected for all program units in the software being tested. Whenever one unit interfaces with another, those interfaces are reconciled. For example, if program unit A transmits data to program unit B, program unit B should indicate that it has received that input from program unit A. Interfaces not reconciled are examined before integration tests are executed.

Step 3 - Create Integration Test Conditions: One or more test conditions are prepared for integrating each program unit. After the condition is created, the number of the test condition is documented in the test template.

Step 4 - Evaluate the Completeness of Integration Test Conditions: The following list of questions will help guide evaluation of the completeness of integration test conditions recorded on the integration testing template. This list can also help determine whether the test conditions created for the integration process are complete.

Top-down Testing: In this approach testing is conducted from the main module to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.

Advantages:

- Advantageous if major flaws occur toward the top of the program.
- Once the I/O functions are added, representation of test cases is easier.
- Early skeletal program allows demonstrations and boosts morale.

Disadvantages:

- Stub modules must be produced.
- Stub modules are often more complicated than they first appear to be.
- Before the I/O functions are added, representation of test cases in stubs can be difficult.
- Test conditions may be impossible, or very difficult, to create.
- Observation of test output is more difficult.
- Allows one to think that design and testing can be overlapped.
- Induces one to defer completion of the testing of certain modules.

Bottom-up Testing: In this approach testing is conducted from the sub-modules to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.

Advantages:

- Advantageous if major flaws occur toward the bottom of the program.
- Test conditions are easier to create.
- Observation of test results is easier.

Disadvantages:

- Driver modules must be produced.
- The program as an entity does not exist until the last module is added.

Stubs and Drivers

It is always a good idea to develop and test software in "pieces". But, it may seem impossible because it is hard to imagine how you can test one "piece" if the other "pieces" that it uses have not yet been developed (and vice versa).

A software application is made up of a number of Units, where the output of one Unit goes as an input to another Unit. For example, a Sales Order Printing program takes a Sales Order as input, which is actually an output of the Sales Order Creation program.

Due to such interfaces, independent testing of a Unit becomes impossible. But that is what we want to do; we want to test a Unit in isolation! So here we use Stubs and Drivers.

A Driver is a piece of software that drives (invokes) the Unit being tested. A driver creates necessary Inputs required for the Unit and then invokes the Unit.

A Driver passes test cases to another piece of code. A Test Harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation. It can be described as a software module which is used to invoke a module under test, provide test inputs, control and monitor execution, and report test results; or, most simplistically, as a line of code that calls a method and passes that method a value.

For example, if you wanted to move a fighter in the game, the driver code would be moveFighter(Fighter, LocationX, LocationY);

This driver code would likely be called from the main method. A white-box test case would execute this driver line of code and check "fighter.getPosition()" to make sure the player is now on the expected cell on the board.
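As a rough illustration, a minimal driver for this scenario might look like the sketch below. The Fighter class, the way a position is encoded, and the expected value are all assumptions invented for this example; the real game code may differ.

public class FighterMoveDriver {

    // Simplified stand-in for the game's Fighter class (assumption for this sketch).
    static class Fighter {
        private int position;
        int getPosition() { return position; }
        void setPosition(int position) { this.position = position; }
    }

    // The unit under test; here a cell (x, y) is encoded as x * 100 + y (assumption).
    static void moveFighter(Fighter player, int locationX, int locationY) {
        player.setPosition(locationX * 100 + locationY);
    }

    public static void main(String[] args) {
        Fighter fighter = new Fighter();   // the driver creates the necessary input
        moveFighter(fighter, 3, 5);        // and then invokes the unit under test
        // White-box check: is the fighter now on the expected cell?
        if (fighter.getPosition() == 305) {
            System.out.println("PASS: fighter moved to the expected cell");
        } else {
            System.out.println("FAIL: fighter is at " + fighter.getPosition());
        }
    }
}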

A Unit may reference another Unit in its logic. A Stub takes the place of such a subordinate unit during Unit Testing.

A Stub is a piece of software that works similarly to the unit which is referenced by the Unit being tested, but it is much simpler than the actual unit. A Stub works as a stand-in for the subordinate unit and provides the minimum required behavior for that unit. A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system.

Four basic types of Stubs for Top-Down Testing are:

- Display a trace message
- Display parameter value(s)
- Return a value from a table
- Return a table value selected by parameter
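For illustration, the four stub types above might look like the following in Java. The getExchangeRate unit, its signature, and the table values are hypothetical, invented only to show the four styles:

public class ExchangeRateStubs {

    // 1. Display a trace message.
    static double getExchangeRateTrace(String currency) {
        System.out.println("TRACE: getExchangeRate called");
        return 1.0;
    }

    // 2. Display the parameter value(s).
    static double getExchangeRateShowParams(String currency) {
        System.out.println("getExchangeRate called with currency=" + currency);
        return 1.0;
    }

    // 3. Return a value from a table (always the same canned entry).
    static final double[] RATE_TABLE = {1.0, 0.92, 151.4};
    static double getExchangeRateFixed(String currency) {
        return RATE_TABLE[0];
    }

    // 4. Return a table value selected by the parameter.
    static double getExchangeRateByParam(String currency) {
        switch (currency) {
            case "EUR": return RATE_TABLE[1];
            case "JPY": return RATE_TABLE[2];
            default:    return RATE_TABLE[0];
        }
    }
}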

A stub is a computer program which is used as a substitute for the body of a software module that is or will be defined elsewhere; in other words, a dummy component or object used to simulate the behavior of a real component until that component has been developed.

For example, if the moveFighter method has not been written yet, a stub such as the one below might be used temporarily; it moves any player to position 1.

public void moveFighter(Fighter player, int LocationX, int LocationY)
{ player.setPosition(1); }   // stub: ignores the requested location and always moves the player to position 1

Ultimately, the dummy method would be completed with the proper program logic. However, developing the stub allows the programmer to call a method in the code being developed, even if the method does not yet have the desired behavior.

The programmer needs to create such Drivers and Stubs to carry out Unit Testing.

Both the Driver and the Stub are kept at a minimum level of complexity, so that they do not induce any errors while testing the Unit in question.

Stubs and drivers are often viewed as throwaway code. However, they do not have to be thrown away: Stubs can be "filled in" to form the actual method. Drivers can become automated test cases.

Example - For Unit Testing of the Sales Order Printing program, a Driver program will have the code which creates Sales Order records using hardcoded data and then calls the Sales Order Printing program. Suppose this printing program uses another unit which calculates sales discounts through some complex calculations. The call to this unit will then be replaced by a Stub, which will simply return fixed discount data.
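A minimal sketch of this example is shown below. All class and method names (SalesOrder, calculateDiscount, printSalesOrder) are assumptions made for illustration, not the actual program:

public class SalesOrderPrintingTest {

    static class SalesOrder {
        String id;
        double amount;
        SalesOrder(String id, double amount) { this.id = id; this.amount = amount; }
    }

    // Stub: stands in for the complex discount-calculation unit and returns fixed data.
    static double calculateDiscount(SalesOrder order) {
        return 0.10;   // fixed 10% discount, regardless of the order
    }

    // The unit under test: prints a sales order using the discount unit.
    static void printSalesOrder(SalesOrder order) {
        double discount = calculateDiscount(order);
        System.out.println("Order " + order.id + ": net amount " + order.amount * (1 - discount));
    }

    // Driver: creates hardcoded input and invokes the unit under test.
    public static void main(String[] args) {
        SalesOrder order = new SalesOrder("SO-001", 200.0);   // hardcoded test data
        printSalesOrder(order);                               // expected output: net amount 180.0
    }
}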

Cross Browser Testing Tools - Reduce Browser Compatibility Testing Effort


Sometimes, testing on various browsers becomes a challenge for software test professionals & project teams. Running the test cases on all browsers makes the testing cost very high. It becomes especially challenging when we do not have expert designers on the team or when we don't have a verification/validation phase at the time of screen design. This is the bad part. Now, let's see what is good.

The best thing is that there are many FREE as well as paid cross-browser compatibility testing tools available in the market. On top of that, you can do your job with most of the FREE tools. If you have very specific requirements, then you may need a paid cross-browser compatibility testing tool. Let's have a quick look at some of the best tools:

1. IE Tab: This is one of my favourite & best tools available for free. It is basically an add-on for Firefox & Chrome. With a single mouse click from within Firefox & Chrome, you can see how the webpage will look in Internet Explorer. It is very light.

2. Microsoft Super Preview: This is a free tool offered by Microsoft. It can help you check a webpage on various versions of Internet Explorer, and you can use it to test and debug layout issues. You can download it for free from the Microsoft website.

3. Spoon Browser Sandbox: You can use this testing tool to test the web application on almost all major browsers like Firefox, Chrome & Opera. Initially it supported IE as well, but over the last few months its support for IE has been reduced.

4. Browsershots: Using this free browser compatibility testing tool, you can test the application on many platform & browser combinations, so it is one of the most widely used tools. Due to the large combination of browsers & platforms, it takes a long time to display results.

5. IE Tester: Using this tool, you can test web pages on various versions of IE on various Windows platforms like Windows Vista, Windows 7 & XP.

6. BrowserCam: This is a paid browser compatibility online testing tool. Its trial usage allows you to test for 24 hours only, with a screen limit of 200.

7. Cross Browser Testing: This is a perfect tool for testing a website's JavaScript, Ajax and Flash features on various browsers. It offers a 1-week free trial. It is available at http://crossbrowsertesting.com/

8. Cloud Testing: If you want to test your application's browser compatibility on various browsers like IE, Firefox, Chrome and Opera, then this tool is for you.

Apart from these tools, there are a few other tools like IE NetRenderer, Browsera, Adobe Browser Lab etc. By investing some time on R&D on these tools, you can save huge effort with excellent quality.

Unit Testing - Best Practices & Techniques


Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as expected. Each unit is tested separately before integrating them into modules to test the interfaces between modules. By means of effective unit testing, a large percentage of defects are identified. Unit testing is performed by developers. Each module that is developed by designers needs to be tested individually to verify proper operation, so that any faulty module can be fixed immediately rather than left to exist and then cause some major issue in the integration phase. Once all of the units in a program have been found to be working efficiently and without any bugs, larger components of the program can be evaluated by means of integration testing. Though unit testing may be time consuming, tedious and demanding of thoroughness on the part of the development team, in the long run it can avoid major pitfalls in the software. Benefits of unit testing:


- The modular approach during unit testing eliminates the dependency on other modules during testing. We can test parts of a project without waiting for the other parts to be available.
- Designers can identify and fix problems immediately, as the modules are best known to them. This helps in fixing multiple problems simultaneously.
- The cost of fixing a defect identified during the early stages is less compared to that during a later stage.
- Debugging is simplified.
- Structural coverage of code is higher.
- Unit testing is more cost effective compared to the other stages of testing.

Misconceptions about Unit Testing:



- Integration Tests will Catch all the Bugs Anyway: This is one of the common misconceptions of designers. The complexity of an issue rises as it passes through various testing cycles, and when a bug is raised during the later stages the resolution time is high, as the scope of the bug has widened. It is better to pull out the weeds before they poison the whole farm.
- Programmer's Dilemma: Most designers believe unit testing is not actually required as it is time consuming. They feel they are such good programmers that their software doesn't need unit tests. But in the real world, everyone makes mistakes. Real software systems are much more complex. Software behavior varies in different environments and with different scenarios. Coding is not a one-pass process. Enhancements to the code can be made only when we know the existing module is functioning as expected.
- It Consumes Too Much Time: Developers are often in a hurry to complete their code and integrate it. Unit testing is often considered a useless activity, as they feel the code will be tested by QA anyway. But there is no point in having a system which works, yet not exactly as it is supposed to function, and which is full of bugs. Practically, such an approach to development will often result in software which will not even run. The net result is that a lot of time will be spent tracking down relatively simple bugs which are wholly contained within particular units. Individually, such bugs may be trivial, but collectively they result in an excessive period of time integrating the software to produce a system which is unlikely to be reliable. This can also lead to failure to meet the required deadlines.

Unit Testing Best Practices:

- Ensure each unit test case is independent of the others. The software is prone to change during unit testing due to enhancements/changes to the requirements, hence any given behavior should be specified in one and only one test. Otherwise, if you later change that behavior, you'll have to change multiple tests.
- Test only one unit of code at a time. It is always recommended to test each of the modules independently and not while all are chained together. Otherwise you will have lots of overlap between tests, and changes to one unit may affect all other modules and cause the software to fail.
- Name your unit tests clearly and consistently. Ensure that your test cases are easily readable so that anyone picking up the unit test cases can execute them without any issues. Ensure the test case nomenclature is consistent throughout.
- Before changing a module interface or implementation, make sure that the module has test cases and that it passes its tests. This way you can know that your changes didn't break anything.
- Always ensure a bug identified during unit testing is fixed before moving to the next phase.
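As a rough sketch of these practices, here is what two independent, clearly named unit tests might look like in Java with JUnit 4 (assumed to be on the classpath); the PriceCalculator class and its methods are hypothetical:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Simple stand-in for the unit under test (assumption for this sketch).
    static class PriceCalculator {
        double applyDiscount(double amount, double discount) { return amount * (1 - discount); }
        double addTax(double amount, double taxRate) { return amount * (1 + taxRate); }
    }

    // Each behavior is specified in exactly one, clearly named test.
    @Test
    public void applyDiscount_reducesAmountByDiscountRate() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.applyDiscount(100.0, 0.10), 0.0001);
    }

    @Test
    public void addTax_increasesAmountByTaxRate() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(108.0, calc.addTax(100.0, 0.08), 0.0001);
    }
}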

Unit Testing Techniques: Structural, Functional & Error-based Techniques

Structural Techniques: This is a white-box testing technique that uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs. Major structural techniques are:

- Statement Testing: A test strategy in which each statement of a program is executed at least once.
- Branch Testing: Testing in which all branches in the program source code are tested at least once.
- Path Testing: Testing in which all paths in the program source code are tested at least once.
- Condition Testing: Condition testing allows the programmer to determine the path through a program by selectively executing code based on the comparison of a value.
- Expression Testing: Testing in which the application is tested for different values of a regular expression.
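To illustrate statement, branch, and condition testing, consider the small hypothetical Java method below and the inputs needed to cover it (the freeShipping rule is invented for this sketch):

public class CoverageExample {

    // Hypothetical unit under test: decides whether an order ships for free.
    static boolean freeShipping(double amount, boolean isMember) {
        if (amount > 100 || isMember) {   // one decision, two conditions
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Statement coverage: (150, false) and (50, false) together execute every statement.
        // Branch coverage: the same two inputs drive the decision to both true and false.
        System.out.println(freeShipping(150, false));  // true branch
        System.out.println(freeShipping(50, false));   // false branch
        // Condition coverage additionally needs (50, true) so that each individual
        // condition (amount > 100, isMember) takes both outcomes at least once.
        System.out.println(freeShipping(50, true));
    }
}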

Functional Testing Techniques: These are black-box testing techniques which test the functionality of the application. Some functional testing techniques are:

- Input Domain Testing: This technique concentrates on the size and type of every input object in terms of boundary value analysis and equivalence classes.
- Boundary Value Analysis: A software test design technique in which tests are designed to include representatives of boundary values.
- Syntax Checking: A technique which is used to check the syntax of the application.
- Equivalence Partitioning: A software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived.
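For example, a hypothetical unit that accepts ages from 18 to 60 could be exercised with equivalence classes and boundary values as in the sketch below (the rule and the chosen values are assumptions for illustration):

public class AgeValidationExample {

    // Hypothetical unit under test: valid ages are 18 to 60 inclusive.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Equivalence classes: below the range, inside the range, above the range.
        int[] representatives = {10, 35, 70};
        // Boundary values: just outside and exactly on each boundary.
        int[] boundaries = {17, 18, 19, 59, 60, 61};

        for (int age : representatives) {
            System.out.println("class representative " + age + " -> " + isValidAge(age));
        }
        for (int age : boundaries) {
            System.out.println("boundary value " + age + " -> " + isValidAge(age));
        }
    }
}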

Error-based Techniques: The best person to know the defects in his code is the person who designed it. A few of the error-based techniques are:

- Fault Seeding: Known defects are put into the code, and testing continues until they are all found.
- Mutation Testing: This is done by mutating certain statements in your source code and checking if your test code is able to find the errors. Mutation testing is very expensive to run, especially on very large applications.
- Historical Test Data: This technique calculates the priority of each test case using historical information from the previous executions of the test case.
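To illustrate mutation testing, here is a hypothetical unit, a mutant of it, and the boundary-value check that "kills" the mutant (all names and values are assumptions for this sketch):

public class MutationExample {

    // Original unit: a customer qualifies for a discount at 100 or more points.
    static boolean qualifies(int points) {
        return points >= 100;
    }

    // Mutant: the >= operator has been mutated to >.
    static boolean qualifiesMutant(int points) {
        return points > 100;
    }

    public static void main(String[] args) {
        // A test on the boundary value 100 kills the mutant:
        // it passes against the original but fails against the mutant.
        System.out.println("original, 100 points: " + qualifies(100));       // true (test expects true)
        System.out.println("mutant,   100 points: " + qualifiesMutant(100)); // false (mutant detected)
    }
}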

A careful approach to unit testing helps detect many bugs at a stage of software development where they can be corrected economically. It is a tedious process when bugs are detected and corrected at later stages of software development, as fixing the bugs is then difficult, time consuming and costly. Efficiency and quality are best served by testing software as early in the life cycle as possible. Whenever any changes are made to the software, we need to ensure regression testing is performed. Testing strategies like thorough unit testing, good management of the testing process, and appropriate use of tools help in maximizing the effectiveness of the testing effort. Effective unit testing is all part of developing a very high quality software product which can benefit the organization as a whole.

How to do System Testing


Testing the software system or software application as a whole is referred to as System Testing of the software. System testing of the application is done on the complete application software to evaluate the software's overall compliance with the business / functional / end-user requirements. System testing comes under black-box software testing, so knowledge of the internal design, structure or code is not required for this type of software testing. In system testing a software test professional aims to detect defects or bugs both within the interfaces and within the software as a whole. However, during integration testing of the application or software, the software test professional aims to detect the bugs / defects between the individual units that are integrated together. During system testing, the focus is on the software design, behavior and even the believed expectations of the customer. So, we can also refer to the system testing phase of software testing as the investigatory testing phase of the software development life cycle.

At what stage of the SDLC does System Testing come into the picture: After the integration of all components of the software being developed, the whole software system is rigorously tested to ensure that it meets the specified business, functional & non-functional requirements. System Testing is built on the unit testing and integration testing levels. Generally, a separate and dedicated team is responsible for system testing, and system testing is performed on a staging server.

Why system testing is required:

- It is the first level of software testing where the software / application is tested as a whole.
- It is done to verify and validate the technical, business, functional and non-functional requirements of the software. It also includes the verification & validation of the software application architecture.
- System testing is done on a staging environment that closely resembles the production environment where the final software will be deployed.

Entry Criteria for System Testing:



- Unit Testing must be completed
- Integration Testing must be completed
- The complete software system should be developed

- A software testing environment that closely resembles the production environment must be available (staging environment).

System Testing in seven steps:

1. Creation of the system test plan
2. Creation of system test cases
3. Selection / creation of test data for system testing
4. Test automation of the execution of automated test cases (if required)
5. Execution of test cases
6. Bug fixing and regression testing
7. Repeat the software test cycle (if required, on multiple environments)

Contents of a system test plan: The contents of a software system test plan may vary from organization to organization or project to project. It depends on how we have created the software test strategy, project plan and master test plan of the project. However, the basic contents of a software system test plan should be:

- Scope
- Goals & Objectives
- Area of focus (critical areas)
- Deliverables
- System testing strategy
- Schedule
- Entry and exit criteria
- Suspension & resumption criteria for software testing
- Test Environment
- Assumptions
- Staffing and Training Plan
- Roles and Responsibilities
- Glossary

How to write system test cases: System test cases are written in a similar way to functional test cases. However, while creating system test cases the following two points need to be kept in mind:

- System test cases must cover the use cases and scenarios
- They must validate all types of requirements - technical, UI, functional, non-functional, performance etc.

As per Wikipedia, there are a total of 24 types of testing that need to be considered during system testing. These are: GUI software testing, Usability testing, Performance testing, Compatibility testing, Error handling testing, Load testing, Volume testing, Stress testing, User help testing, Security testing, Scalability testing, Capacity testing, Sanity testing, Smoke testing, Exploratory testing, Ad hoc testing, Regression testing, Reliability testing, Recovery testing, Installation testing, Idempotency testing, Maintenance testing, Recovery and failover testing, Accessibility testing.

The format of system test cases contains:

- Test Case ID - a unique number
- Test Suite Name
- Tester - name of the tester who executes or writes the test case
- Requirement - requirement ID or a brief description of the functionality / requirement
- How to Test - steps to follow for execution of the test case
- Test Data - input data
- Expected Result
- Actual Result
- Pass / Fail
- Test Iteration

Performance testing - It is performed to evaluate the performance of components of a particular system in a specific situation. It is a very wide term. It includes: load testing, stress testing, capacity testing, volume testing, endurance testing, spike testing, scalability testing, reliability testing etc. This type of testing generally does not give a pass or fail result. It is basically done to set the benchmark & standard of the application for concurrency / throughput, server response time, latency, render response time etc. In other words, you can say it is a technical & formal evaluation of responsiveness, speed, scalability and stability characteristics.

Load Testing is a subset of performance testing. It is done by constantly increasing the load on the application under test until it reaches the threshold limit. The main goal of load testing is to identify the upper limit of the system in terms of database, hardware, network etc. A common goal of load testing is to set the SLAs for the application. An example of load testing: run multiple applications on a computer simultaneously - start with one application, then start a second application, then a third and so on, and observe the performance of your computer. Endurance testing is also a part of load testing; it is used to calculate metrics like Mean Time Between Failures and Mean Time To Failure. Load Testing helps to determine:

- Throughput
- Peak production load
- Adequacy of the H/W environment
- Load balancing requirements
- How many users the application can handle with optimal performance results
- How many users the hardware can handle with optimal performance results
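A very simplified load-driver sketch in Java is shown below. The target operation, the user counts, and the timing-only measurement are assumptions chosen for illustration; real load tests are normally driven by dedicated tooling rather than hand-written code.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {

    // Hypothetical operation standing in for a real transaction (e.g. an HTTP request).
    static void placeOrder() throws InterruptedException {
        Thread.sleep(5);   // simulate work
    }

    public static void main(String[] args) throws InterruptedException {
        // Step the simulated concurrent-user count up and watch how the timings degrade.
        for (int users : new int[] {10, 50, 100, 200}) {
            ExecutorService pool = Executors.newFixedThreadPool(users);
            CountDownLatch done = new CountDownLatch(users);
            long start = System.nanoTime();
            for (int i = 0; i < users; i++) {
                pool.submit(() -> {
                    try { placeOrder(); } catch (InterruptedException ignored) { }
                    done.countDown();
                });
            }
            done.await();
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(users + " concurrent users -> total time " + elapsedMs + " ms");
        }
    }
}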

Stress testing - It is done to evaluate the application's behaviour beyond normal or peak load conditions. It is basically testing the functionality of the application under high loads. Normally the issues found are related to synchronization, memory leaks or race conditions etc. Some testing experts also call it fatigue testing. Sometimes, it becomes difficult to set up a controlled environment before running the test. An example of stress testing: a banking application can take a maximum user load of 20000 concurrent users. Increase the load to 21000 and do some transactions like deposit or withdraw. As soon as you do the transaction, the banking application server database will sync with the ATM database server. Now check whether, with a user load of 21000, this sync happened successfully. Then repeat the same test with 22000 concurrent users and so on. Spike testing is also a part of stress testing; it is performed by repeatedly loading the application with heavy loads that increase beyond production levels for a short duration. Stress Testing helps to determine:

- Errors and slowness at peak user loads
- Any security loopholes under overload
- How the hardware reacts to overload
- Data corruption issues under overload

I would add my opinion that stress testing has the goal of breaking the app. That's the only way to know the upper limit.

Most of our customers consider endurance testing to be heavy load running for many hours in order to find memory leaks.

Web load testing is:



- similar to, but not synonymous with, performance testing
- concerned with the volume of traffic your website (or application) can handle
- not intended to break the system
- viewing the system from the user perspective
- associated with black-box testing

Web performance testing is:



- a superset of load testing
- concerned with the speed and efficiency of various components of the web application
- useful with only one user and/or one transaction
- viewing the system from the architecture perspective (behind the server-side curtain)
- associated with white-box testing


Q1. What is Software Testing?
Ans. Operation of a system or application under controlled conditions and evaluating the results. The controlled conditions must include both normal and abnormal conditions. It is oriented to detection.

Q2. What is Software Quality Assurance?
Ans. Software QA involves monitoring and improving the entire software development process, making sure that any agreed-upon standards and procedures are followed. It is oriented to prevention.

Q3. What are the qualities of a good test engineer?
Ans. A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, tact and diplomacy, and good communication skills. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process. Good judgment skills also matter.

Q4. What are the qualities of a good QA engineer?
Ans. The same qualities as a good tester. Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Q5. What are the qualities of a good QA or Test manager?
Ans. A good QA or Test manager must be familiar with the software development process, be able to maintain the enthusiasm of their team and promote a positive atmosphere, always look for ways to prevent problems, promote teamwork to increase productivity, promote cooperation between software, test, and QA engineers, have the skills needed to promote improvements in QA processes, have the ability to say 'no' to other managers when quality is insufficient or QA processes are not being adhered to, have people-judgement skills for hiring and keeping skilled personnel, and be able to run meetings and keep them focused.

Q6. What is the 'software life cycle'?
Ans. The life cycle begins when an application is first conceived and ends when it is no longer in use.

Q7. Tell us about some world-famous bugs.

Ans.
1. In December of 2007 an error occurred in a new ERP payroll system for a large urban school system. More than one third of employees received incorrect paychecks, resulting in overpayments of $53 million. Inadequate testing reportedly contributed to the problems.
2. A software error reportedly resulted in overbilling to 11,000 customers of a major telecommunications company in June of 2006. Making the corrections in the bills took a long time.
3. In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges.

Q8. What are the common problems in the software development process?
Ans. Poor requirements, unrealistic schedules, inadequate testing, requests to pile on new features after development is underway, and miscommunication.

Q9. What are the common solutions to software development problems?
Ans. Solid requirements, realistic schedules, adequate testing, sticking to initial requirements where feasible, and requiring walkthroughs and inspections when appropriate.

Q10. What is Quality Software?
Ans. Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and / or expectations, and is maintainable.

Q11. What is good code?
Ans. Good code is code that works, is reasonably bug free, and is readable and maintainable.

Q12. What is good design?
Ans. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable. It should also be robust, with sufficient error-handling and status-logging capability, and work correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

Q13. What's the role of documentation in QA?
Ans. QA practices must be documented to enhance their repeatability. There should be a system for easily finding and obtaining information and determining what documentation will have a particular piece of information.

Q14. Which projects may not need independent test staff?
Ans. It depends on the size & nature of the project, the business risks, the development methodology, and the skills and experience of the developers.

Q15. Why does software have bugs?
Ans. Miscommunication or no communication, software complexity, programming errors, changing requirements, time pressures, poorly documented code, software development tools, and egos - people prefer to say things like 'no problem', 'piece of cake', 'I can whip that out in a few hours'.

Q15. How can QA processes be introduced in an organization?
Ans.
1. It depends on the size of the organization and the risks involved, e.g. for large organizations with high-risk projects a formalized QA process is necessary.

2. If the risk is lower, management and organizational buy-in and QA implementation may be slower.

3. The most value for effort will often be in

- Requirements management processes - Design inspections and code inspections - post-mortems / retrospectives

Q16. What are the steps to perform software testing? Ans.

- Understand requirements and business logic - Get budget and schedule requirements - Determine required standards and processes - Set priorities, and determine scope and limitations of tests

- Determine test approaches and methods - Determine test environment, testware, test input data requirements - Set milestones and prepare test plan document - Write test cases - Have needed reviews/inspections/approvals of test cases - Set up test environment - Execute test cases - Evaluate and report results - Bug tracking and fixing - Retesting or regression testing if needed - Update test plans, test cases, test results, traceability matrix etc.

Q17. What is a test plan? Ans. A document that describes the objectives, scope, approach, and focus of a software testing effort.

Q18. What are the contents of test plan? Ans.

- Title and identification of software including version etc. - Revision history - Table of Contents - Purpose of document and intended audience - Objective and software product overview - Relevant related document list and standards or legal requirements - Naming conventions - Overview of software project organization - Roles and responsibilities etc. - Assumptions and dependencies - Risk analysis - Testing priorities - Scope and limitations of testing effort - Outline of testing effort and input data

- Test environment setup and configuration issues - Configuration management processes - Outline of bug tracking system - Test automation if required - Any tools to be used, including versions, patches, etc. - Project test metrics to be calculated - Testing deliverables - Reporting plan - Testing entrance and exit criteria - Sanity testing period and criteria - Test suspension and restart criteria - Personnel pre-training needs - Relevant proprietary, classified, security and licensing issues. - Open issues if any - Appendix

Q19. What is a test case? Ans. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly.

Q20. What are the components of a bug report? Ans.

- Application name - The function / module name - Bug ID - Bug reporting date - Status - Test case ID - Bug description - Steps needed to reproduce the bug - Names and/or descriptions of file/data/messages/etc. used in test

- Snapshot that would be helpful in finding the cause of the problem - Severity estimate - Was the bug reproducible? - Name of tester - Description of problem cause (filled by developers) - Description of fix (filled by developers) - Code section/file/module/class/method that was fixed (filled by developers) - Date of fix (filled by developers) - Date of retest or regression testing - Any remarks or comments

Q21. What is verification? Ans. It involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. It can be done with checklists, issues lists, walkthroughs, and inspection meetings etc.

Q22. What is validation? Ans. It involves actual testing and takes place after verifications are completed.

Q23. What is a walkthrough? Ans. An informal meeting for evaluation or informational purposes.

Q24. What's an inspection? Ans. It is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.

Q25. What is configuration management? Ans. It covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools / compilers / libraries / patches, changes made to them, and who makes the changes.

Q26. When can you stop testing? Ans.

- Deadlines (release deadlines, testing deadlines, etc.) - Test cases completed with a certain percentage passed - Test budget depleted - Coverage of code/functionality/requirements reaches a specified point - Bug rate falls below a certain level - Beta or alpha testing period ends

Q27. What if there isn't enough time for thorough testing? Ans. Consider the following scenarios:

- Which functionality is most important from a business point of view? - Which functionality is most visible to the user? - Which functionality has the largest financial impact? - Which aspects of the application are most important to the customer? - Which parts of the code are most complex? - Which parts of the application were developed in a rush? - Which aspects of similar/related previous projects caused problems? - What do the developers think are the highest-risk aspects of the application? - What kinds of problems would cause the worst publicity? - What kinds of problems would cause the most customer service complaints? - What kinds of tests could easily cover multiple functionalities?

Q28. What if the project isn't big enough to justify extensive testing? Ans. Do risk analysis. See the impact of project errors, not the size of the project.

Q29. How can web based applications be tested? Ans. Apart from functionality consider the following:

- What are the expected loads on the server and what kind of performance is expected on the client side?

- Who is the target audience? - Will down time for server and content maintenance / upgrades be allowed? - What kinds of security will be required and what is it expected to do? - How reliable are the site's Internet / intranet connections required to be? - How do the internet / intranet affect backup system or redundant connection requirements and testing? - What variations will be allowed for targeted browsers? - Will there be any standards or requirements for page appearance and / or graphics throughout a site or parts of a site? - How will internal and external links be validated and updated? - How are browser caching and variations in browser option settings to be handled? - How are flash, applets, java scripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested? - From the usability point of view consider the following:

-- Pages should be no longer than 3-5 screens. -- The page layouts and design elements should be consistent throughout the application / web site. -- Pages should be as browser-independent as possible, or generated based on the browser type. -- There should be no dead-end pages. A link to a contact person or organization should be included on each page.

Q30. What is Extreme Programming? Ans. Extreme Programming is a software development approach for risk-prone projects with unstable requirements. Unit testing is a core aspect of Extreme Programming. Programmers write unit and functional test code first - before writing the application code. Generally, customers are expected to be an integral part of the project team and to help create / design scenarios for acceptance testing.

Smoke Testing: Software testing done to ensure whether the build can be accepted for thorough software testing or not. Basically, it is done to check the stability of the build received for software testing.

Sanity Testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the software bugs or issues have been rectified and no other software bug has been introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done in later cycles, after thorough regression test cycles. If we are moving a build from the staging / testing server to the production server, sanity testing of the software application can be done to check whether the build is sane enough to move further to the production server or not.

Difference between Smoke & Sanity Software Testing:


- Smoke testing is a wide approach where all areas of the software application are tested without going too deep. However, sanity testing is a narrow regression testing with a focus on one or a small set of areas of functionality of the software application.
- The test cases for smoke testing of the software can be either manual or automated. However, a sanity test is generally performed without test scripts or test cases.
- Smoke testing is done to ensure whether the main functions of the software application are working or not; during smoke testing we do not go into finer details. However, sanity testing is a cursory software testing type, done whenever a quick round of software testing can prove that the software application is functioning according to business / functional requirements.
- Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to ensure whether the requirements are met or not.

Identifying Manual / Automated Test Types

The types of tests that need to be designed and executed depend totally on the objectives of the application, i.e., the measurable end state the organization strives to achieve. For example, if the application is a financial application used by a large number of individuals, special security and usability tests need to be performed. However, three types of tests are nearly always required: function, user interface, and regression testing.

Function testing comprises the majority of the testing effort and is concerned with verifying that the functions work properly. It is a black-box-oriented activity in which the tester is completely unconcerned with the internal behavior and structure of the application. User interface testing, or GUI testing, checks the user's interaction or functional window structure. It ensures that object state dependencies function properly and provide useful navigation through the functions. Regression testing tests the application in light of changes made during debugging, maintenance, or the development of a new release.

Other types of tests that need to be considered include system and acceptance testing. System testing is the highest level of testing; it evaluates the functionality as a total system, its performance and its overall fitness of use. Acceptance testing is an optional user-run test which demonstrates the ability of the application to meet the user's requirements. This test may or may not be performed based on the formality of the project; sometimes the system test suffices.

Finally, the tests that can be automated with a testing tool need to be identified. Automated tests provide three benefits: repeatability, leverage, and increased functionality. Repeatability enables automated tests to be executed more than once, consistently. Leverage comes from repeating previously captured tests and from tests that can be programmed with the tool, which may not have been possible without automation.

As applications evolve, more and more functionality is added; with automation, the functional coverage is maintained with the test library.

Identifying the Test Exit Criteria: One of the most difficult and political problems is deciding when to stop testing, since it is impossible to know when all the defects have been detected. There are at least four criteria for exiting testing:

Scheduled testing time has expired: This criterion is very weak, since it has nothing to do with verifying the quality of the application. It does not take into account that there may be an inadequate number of test cases, or the fact that there may not be any more defects that are easily detectable.

Some predefined number of defects discovered: The problems with this are knowing the number of errors to detect and also overestimating the number of defects. If the number of defects is underestimated, testing will be incomplete. Potential solutions include experience with similar applications developed by the same development team, predictive models, and industry-wide averages. If the number of defects is overestimated, the test may never be completed within a reasonable time frame. A possible solution is to estimate completion time by plotting defects detected per unit of time. If the rate of defect detection is decreasing dramatically, there may be burnout, an indication that a majority of the defects have been discovered.

All the formal tests execute without detecting any defects: A major problem with this is that the tester is not motivated to design destructive test cases that force the tested program to its design limits, e.g., the tester's job is completed when the test program fields no more errors. The tester is motivated not to find errors and may subconsciously write test cases that show the program is error free. This criterion is only valid if there is a rigorous and totally comprehensive test case suite created which approaches 100% coverage. The problem with this is determining when there is a comprehensive suite of test cases. If it is felt that this is the case, a good strategy at this point is to continue with ad hoc testing. Ad hoc testing is a black-box testing technique in which the tester lets his or her mind run freely to enumerate as many test conditions as possible. Experience has shown that this technique can be a very powerful supplemental or add-on technique.

Combination of the above: Most testing projects utilize a combination of the above exit criteria. It is recommended that all the tests be executed, but any further ad hoc testing will be constrained by time.

Some Facts in Software Testing


- Software testing cannot show the absence of errors.
- There is no last bug in the software / application.
- Maximum coverage through minimum test cases is a big challenge of testing.
- Even for a simple program with loops, there can be over a million paths - more than could be tested manually in a million years. The domain of possible inputs is too large to test, and there are too many possible paths through the program. Even if, theoretically speaking, one could design excellent test cases to detect all defects, does one have the luxury of time and resources to do so in practice?
- Preventive testing is very important. Verify documents, design and code at each stage of development to prevent errors from getting in, and use a variety of techniques for this. Code review itself throws up many defects that may be too difficult to detect by execution of tests. On the other hand, test execution can detect errors that cannot easily be detected by code reviews. The various techniques are therefore complementary in nature, and it is only through their combined use that one can hope to detect most errors.
- Even though development tends to be given more importance than testing, a good test design is perhaps intellectually more challenging than the software design and development. Given reasonable practical limits to the quality of test design in practice, it is easy to understand why it is difficult to uncover all defects through testing.

Test Readiness Review Checklist


Before starting the actual testing, it is important to check whether the system / project / environment is ready for testing. This is called a Test Readiness Review. It is better to do it with a checklist. Below is a sample Test Readiness Review checklist:

1. Are all the tests conducted according to the Test Plan / Cases?
2. Are all problems / defects translated into Defect Reports?
3. Are all the Defect Reports satisfactorily resolved?
4. Is the log of the tests conducted available?
5. Is unit testing complete in all respects?
6. Is integration testing complete?
7. Is all the relevant documentation baselined?
8. Are all work products baselined?
9. Is the test plan baselined?
10. Does the test plan contain the strategy / procedure to conduct the system test?
11. Are baselined test designs and test cases ready?
12. Is the unit / integration test software ready?
13. Is the user manual ready?
14. Is the installation procedure documented?
15. Are all the product requirements implemented?
16. Is the list of known problems available? Is there any "workaround" procedure for the known bugs?
17. Are the test environment needs met for hardware, code, procedures, scripts, test tools etc.?
18. Is the list of exceptions in the test software and test procedures, and their workarounds if any, available?
19. Is the test reporting tool available?
20. Are the designers educated on the test reporting tool?
21. Is any standard methodology / tool used, and is it appropriate to the type of the project?
22. Are the criteria for regression testing defined? Has the regression testing been done accordingly?
23. Is the source code received from the client for performing regression testing complete in all respects?
24. Is the source code frozen for testing?

How to do Cookies Testing


Below is a list of major scenarios for cookie testing of a website. Multiple test cases can be generated from these scenarios by performing various combinations.

1. Check whether the application is writing cookies properly or not.
2. Test to make sure that no personal or sensitive data is stored in the cookie. If it has to be stored in cookies, it should be in encrypted format.
3. If the application under test is a public website, there should not be overuse of cookies. It may result in loss of website traffic if the browser prompts for cookies too often.
4. Close all browsers, delete all previously written cookies and disable cookies in your browser settings. Navigate or use the part of the website which uses cookies. It should display an appropriate message like "For smooth functioning of this site please enable cookies on your browser."
5. Set browser options to prompt whenever a cookie is being stored / saved on your system. Navigate or use the part of the website which uses cookies. It will prompt and ask if you want to accept or reject the cookie. The application under test should display an appropriate message if you reject the cookies. Also, check whether pages crash or data gets corrupted.
6. Close all browser windows and manually delete all cookies. Navigate various web pages and check whether these web pages show unexpected behavior.
7. Edit a few cookies manually in Notepad or some other editor. Make modifications like altering the cookie content, the name of the cookie, the expiry date etc. Now test the site functionality. It should not be possible to read the data inside corrupted cookies.
8. Cookies written by one website should not be accessible by another website.
9. If you are testing an online shopping portal, check that reaching your final order summary page properly deletes the cookie of the previous shopping cart page and that no invalid action or purchase gets executed for the same logged-in user.
10. Check whether the application under test is writing cookies properly on different browsers as intended and the site works properly using these cookies. This test can be done on browsers like different versions of Internet Explorer, Mozilla Firefox, Netscape, Opera etc.
11. If the application under test is using cookies to maintain the login state for users, check whether some id is being displayed in the address bar. Now, change the id & press enter. It should display an access denied message and you should not be able to see another user's account.
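Parts of scenarios 1 and 2 can be partly automated. Below is a minimal sketch using java.net.HttpURLConnection to list the cookies a page writes and flag obviously sensitive plain-text values. The URL and the keywords checked are assumptions; real cookie testing would normally use a browser-automation tool instead.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.Map;

public class CookieInspection {

    public static void main(String[] args) throws Exception {
        // Hypothetical URL of the application under test.
        URL url = new URL("https://example.com/login");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.connect();

        // Collect every Set-Cookie header the server sends back.
        Map<String, List<String>> headers = conn.getHeaderFields();
        List<String> cookies = headers.get("Set-Cookie");
        conn.disconnect();

        if (cookies == null) {
            System.out.println("No cookies written by this page.");
            return;
        }
        for (String cookie : cookies) {
            System.out.println("Cookie: " + cookie);
            // Crude check for obviously sensitive plain-text data (scenario 2 above).
            String lower = cookie.toLowerCase();
            if (lower.contains("password") || lower.contains("creditcard")) {
                System.out.println("  WARNING: possible sensitive data stored in plain text");
            }
        }
    }
}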

Software Test Planning


The purpose of test planning is to provide the basis for accomplishing testing in an organized manner. From a managerial point of view it is the most important document, because it helps manage the test project. If a test plan is comprehensive and carefully thought out, test execution and analysis should proceed smoothly. The test plan is an ongoing document, particularly in the spiral environment, since the system is constantly changing; as the system changes, so does the test plan.

A good test plan is one which:
- Has a good chance of detecting a majority of the defects
- Provides test coverage for most of the code
- Is flexible
- Is executed easily, repeatably, and automatically
- Defines the types of tests to be performed
- Clearly documents the expected results
- Provides for defect reconciliation when a defect is discovered
- Clearly defines the test objectives
- Clarifies the test strategy
- Clearly defines the test exit criteria
- Is not redundant
- Identifies the risks
- Documents the test requirements
- Defines the test deliverables

The test planning methodology includes three steps: 1. building the test plan, 2. defining the metrics, 3. reviewing/approving the test plan.

Step 1: Build a Test Plan
- Prepare an Introduction
- Define the High-Level Functional Requirements (In Scope)
- Identify Manual / Automated Test Types
- Identify the Test Exit Criteria
- Establish Regression Test Strategy
- Define the Test Deliverables
- Organize the Test Team
- Establish a Test Environment
- Define the Dependencies
- Create a Test Schedule
- Select the Test Tools
- Establish Defect Recording / Tracking Procedures
- Establish Change Request Procedures
- Establish Version Control Procedures
- Define Configuration Build Procedures
- Define Project Issue Resolution Procedures
- Establish Reporting Procedures
- Define Approval Procedures

Step 2: Define the Metric Objectives
- Define the Metrics
- Define the Metric Points

Step 3: Review/Approve the Plan
- Schedule / Conduct the Review
- Obtain Approvals

Software Testing Techniques and Levels


In this post, I'm going to describe techniques and strategies for software testing. Techniques cover different ways testing can be accomplished. Testing techniques can be classified in three ways: by Preparation, by Execution and by Approach.

Preparation: From the preparation point of view there are two testing techniques: Formal Testing and Informal Testing.

Formal Testing: Testing performed with a plan, a documented set of test cases, etc. that outline the methodology and test objectives. Test documentation can be developed from requirements, design, equivalence partitioning, domain coverage, error guessing, etc. The level of formality and thoroughness of test cases will depend upon the needs of the project. Some projects can have rather informal formal test cases, while others will require a highly refined test process. Some projects will require light testing of nominal paths while others will need rigorous testing of exceptional cases.

Informal Testing: Ad hoc testing performed without a documented set of objectives or plans. Informal testing relies on the intuition and skills of the individual performing the testing. Experienced engineers can be productive in this mode by mentally performing test cases for the scenarios being exercised.

From the execution point of view, the two testing types are: Manual Testing and Automated Testing.

Manual Testing: Manual testing involves direct human interaction to exercise software functionality and note behavior and deviations from expected behavior.

Automated Testing: Testing that relies on a tool, built-in test harness, test framework, or other automatic mechanism to exercise software functionality, record output, and possibly detect deviations. The test cases performed by automated testing are usually defined as software code or script that drives the automatic execution.

From the testing approach point of view, the two testing types are: Structural Testing and Functional Testing.

Structural Testing: Structural testing depends upon knowledge of the internal structure of the software. Structural testing is also referred to as white-box testing.

Data-flow Coverage: Data-flow coverage tests paths from the definition of a variable to its use.

Control-flow Coverage:

Statement Coverage: Statement coverage requires that every statement in the code under test has been executed.

Branch Coverage: Branch coverage requires that every point of entry and exit in the program has been executed at least once, and every decision in the program has taken all possible outcomes at least once.

Condition Coverage: Condition coverage is branch coverage with the additional requirement that every condition in a decision in the program has taken all possible outcomes at least once. Multiple condition coverage requires that all possible combinations of the possible outcomes of each condition have been tested. Modified condition coverage requires that each condition has been tested independently.

Functional Testing: Functional testing compares the behavior of the test item to its specification without knowledge of the item's internal structure. Functional testing is also referred to as black-box testing.

Requirements Coverage: Requirements coverage requires at least one test case for each specified requirement. A traceability matrix can be used to ensure that requirements coverage has been satisfied.

Input Domain Coverage: Input domain coverage executes a function with a sufficient set of input values from the function's input domain. The notion of a sufficient set is not completely definable, and complete coverage of the input domain is typically impossible. Therefore the input domain is broken into subsets, or equivalence classes, such that all values within a subset are likely to reveal the same defects. Any one value within an equivalence class can be used to represent the whole equivalence class. In addition to a generic representative, each extreme value within an equivalence class should be covered by a test case. Testing the extreme values of the equivalence classes is referred to as boundary value testing.

Output Domain Coverage: Output domain coverage executes a function in such a way that a sufficient set of output values from the function's output domain is produced. Equivalence classes and boundary values are used to provide coverage of the output domain. A set of test cases that reaches the boundary values and a typical value for each equivalence class is considered to have achieved output domain coverage.

Various Software Testing Levels

Although many testing levels tend to be combined with certain techniques, there are no hard and fast rules. Some types of testing imply certain lifecycle stages, software deliverables, or other project context. Other types of testing are general enough to be done almost any time on any part of the system. Some require a particular methodology. Where appropriate, common utilizations of a particular testing type will be described. The project's test plan will normally define the types of testing that will be used on the project, when they will be used, and the strategies they will be used with. Test cases are then created for each testing type.

Unit Testing: A unit is an abstract term for the smallest thing that can be conveniently tested. This will vary based on the nature of a project and its technology, but usually focuses at the subroutine level. Unit testing is the testing of these units. Unit testing is often automated and may require creation of a harness, stubs, or drivers.

Component Testing: A component is an aggregate of one or more units. Component testing expands unit testing to include called components and data types. Component testing is often automated and may require creation of a harness, stubs, or drivers.

Single Step Testing: Single step testing is performed by stepping through new or modified statements of code with a debugger. Single step testing is normally manual and informal.

Bench Testing: Bench testing is functional testing of a component after the system has been built in a local environment. Bench testing is often manual and informal.

Developer Integration Testing: Developer integration testing is functional testing of a component after the component has been released and the system has been deployed in a standard testing environment. Special attention is given to the flow of data between the new component and the rest of the system.

Smoke Testing: Smoke testing determines whether the system is sufficiently stable and functional to warrant the cost of further, more rigorous testing. Smoke testing may also communicate the general disposition of the current code base to the project team. Specific standards for the scope or format of smoke test cases and for their success criteria may vary widely among projects.

Feature Testing: Feature testing is functional testing directed at a specific feature of the system. The feature is tested for correctness and proper integration into the system. Feature testing occurs after all components of a feature have been completed and released by development.

Integration Testing: Integration testing focuses on verifying the functionality and stability of the overall system when it is integrated with external systems, subsystems, third-party components, or other external interfaces.

System Testing: System testing occurs when all necessary components have been released internally and the system has been deployed onto a standard environment. System testing is concerned with the behavior of the whole system. When appropriate, system testing encompasses all external software, hardware, operating environments, etc. that will make up the final system.

Release Testing: Release tests ensure that interim builds can be successfully deployed by the customer. This includes product deployment, installation, and a pass through the primary functionality. This test is done immediately before releasing to the customer.

Beta Testing: Beta testing consists of deploying the system to many external users who have agreed to provide feedback about the system. Beta testing may also provide the opportunity to explore release and deployment issues.

Acceptance Testing: Acceptance testing compares the system to a predefined set of acceptance criteria. If the acceptance criteria are satisfied by the system, the customer will accept delivery of the system.

Regression Testing: Regression testing exercises functionality that has stabilized. Once high confidence has been established for certain parts of the system, it is generally wasted effort to continue rigorous, detailed testing of those parts. However, it is possible that continued evolution of the system will have negative effects on previously stable and reliable parts of the system. Regression testing offers a low-cost method of detecting such side effects. Regression testing is often automated and focused on critical functionality.

Performance Testing: Performance testing measures the efficiency, with respect to time and hardware resources, of the test item under typical usage. This assumes that a set of nonfunctional requirements regarding performance exists in the item's specification.

Stress Testing: Stress testing evaluates the performance of the test item during extreme usage patterns. Typical examples of extreme usage patterns are large data sets, complex calculations, extended operation, limited system resources, etc.

Configuration Testing: Configuration testing evaluates the performance of the test item under a range of system configurations. Relevant configuration issues depend upon the particular product and may include peripherals, network patterns, operating systems, hardware devices and drivers, and user settings.

Testing Client Server Applications


To test server-based applications, you need to perform typical tests such as Volume Testing, Stress Testing, Performance Testing, Recovery Testing, Backup and Restore Testing, Security Testing, etc. Stress Testing shows whether the system has the capacity to handle large numbers of transactions during peak periods. An example of a peak period is when everyone is logging back onto an on-line system after it has been down. You will need to perform Volume Testing to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. Basically, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers or across disk partitions on one server. To assess performance under all load conditions, you will need to perform Performance Testing in parallel with Volume and Stress Testing. System performance is generally assessed in terms of response times and throughput rates under differing processing and configuration conditions. If you have identified any business processing cycles, such as month-end or quarter-end, system performance should be tested under an emulation of each processing cycle.
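To make the peak-period scenario above concrete, the sketch below fires a burst of concurrent login requests from a thread pool and reports how long the burst takes. It is only a rough illustration; the URL, user count, pool size and timeout are placeholder values, and a real project would normally use a dedicated load tool instead.

    // StressSketch.java -- minimal sketch of a peak-period login burst.
    // The URL, user count and thread counts are placeholders, not a real benchmark.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class StressSketch {
        public static void main(String[] args) throws Exception {
            int users = 200;                                   // simulated concurrent logins
            ExecutorService pool = Executors.newFixedThreadPool(50);
            CountDownLatch done = new CountDownLatch(users);

            long start = System.currentTimeMillis();
            for (int i = 0; i < users; i++) {
                pool.submit(() -> {
                    try {
                        HttpURLConnection c = (HttpURLConnection)
                                new URL("http://test-server/login").openConnection();
                        c.setConnectTimeout(5000);
                        c.getResponseCode();                   // we only care that the server answers
                    } catch (Exception e) {
                        System.out.println("Request failed: " + e.getMessage());
                    } finally {
                        done.countDown();
                    }
                });
            }
            done.await();
            pool.shutdown();
            System.out.println(users + " logins completed in "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }

A tool such as LoadRunner does the same job with far better measurement and reporting; a hand-written burst like this is only useful as a quick sanity check.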

Also, performance testing should cover performance under all hardware and software system configurations. One myth about Client/Server performance problems is that "the problem can be fixed by simply plugging in a more powerful processor." As a tester you will need to convey the truth that performance degradation may be related to other system components, not just the processor. The problem may be the network, the computer, or the application logic itself. Whatever it is, as a good tester, you'll want to know it before it happens. Before starting to test the interaction between the application and the network, look at cache settings, disk I/O, and network cards. Ask the development team questions like:
- How much application logic should be remotely executed?
- How much updating should be done to the server database over the network from the client workstation?

Look at all of the processes running on the machine and all of the resources each process receives. One suggestion for developers is to design the application so that most of the processing (90% or even more) is done on the client machine. Regardless of the performance testing strategy, it is important to have the right tools. Sometimes we need multiple tools for performance testing (by performance testing, I mean testing the performance of the application and analyzing database performance). Because performance testing tools are very costly and performance testing is not necessary for every application, many companies do not buy these tools. Now, what can be done if you don't have an automated tool? In such situations, you can take some help from the developers. You can ask them to write driver programs in Java or .NET. The program should initiate each of the 10, 20, or 50 SQL queries or classes designed for the application, and it should display the name of each query or class, the compile time, etc. (a minimal sketch of such a driver is shown below). This approach is a bit tricky; I will explain it in another article. The best approach, however, is to use good test tools like LoadRunner. Apart from performance testing, the other tests can include Data Security Testing, Data Backup Testing, and Data Recovery Testing. For Data Security Testing, controlled tests with third-party tools need to be conducted separately or as part of the system test. For example, a tool such as SQLEYE can be used to monitor and restrict database access. As you might know, error-trapping logic can record the problem, bypass the corrupt data, and continue processing as an alternative to a system shutdown. It is your responsibility to assure that the system can properly trap errors.
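Here is a minimal sketch of the kind of driver program mentioned above, written in Java with JDBC. The connection URL, credentials and queries are placeholders that would be replaced with the queries actually used by the application; the idea is simply to run each named query and report its elapsed time.

    // QueryTimingDriver.java -- rough sketch of a home-grown performance driver
    // that runs a fixed list of application queries and reports elapsed time.
    // The connection URL, credentials and the query list are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryTimingDriver {
        public static void main(String[] args) throws Exception {
            String[][] queries = {
                {"OpenOrders",  "SELECT COUNT(*) FROM orders WHERE status = 'OPEN'"},
                {"DailyTotals", "SELECT SUM(amount) FROM transactions WHERE trn_date = CURRENT_DATE"},
            };

            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/testdb", "tester", "secret")) {
                for (String[] q : queries) {
                    long start = System.nanoTime();
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery(q[1])) {
                        while (rs.next()) { /* drain the result set */ }
                    }
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(q[0] + " took " + ms + " ms");
                }
            }
        }
    }

Running the same list before and after a build, or against different server configurations, gives a crude but repeatable comparison when no commercial tool is available.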

The defined backup and restore procedures should be tested as part of server testing. While testing these procedures, some of the following points need to be considered:
- How often and when should the backups be done?
- What is the backup medium (cartridge, disk)?
- Will the backups be manual or automated?
- How will it be verified that the backups occur without errors?
- How long and where will backups be saved?
- How long will it take to restore the last backup?

The data layer of Client-Server applications can be difficult to test because its functionality is "hidden" from direct testing through the GUI. Stored procedures and database triggers are best tested using counters and profilers.
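One way to apply the counter idea is to compare row counts before and after a stored procedure runs, so that the effect of the procedure and of any triggers it fires becomes visible without going through the GUI. This is only a sketch: the procedure, table and connection details below are hypothetical.

    // StoredProcSketch.java -- sketch of testing a stored procedure "behind" the GUI
    // by comparing row counts before and after the call. The procedure, table and
    // connection details are hypothetical.
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class StoredProcSketch {
        static long count(Connection con, String sql) throws SQLException {
            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                rs.next();
                return rs.getLong(1);
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost;databaseName=testdb", "tester", "secret")) {
                long before = count(con, "SELECT COUNT(*) FROM audit_log");

                // Call the procedure under test with a known input.
                try (CallableStatement cs = con.prepareCall("{call post_payment(?, ?)}")) {
                    cs.setInt(1, 1001);                               // hypothetical account id
                    cs.setBigDecimal(2, new java.math.BigDecimal("250.00"));
                    cs.execute();
                }

                long after = count(con, "SELECT COUNT(*) FROM audit_log");
                // The audit trigger is expected to write exactly one row per call.
                System.out.println(after - before == 1
                        ? "PASS" : "FAIL: rows added = " + (after - before));
            }
        }
    }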

CLIENT / SERVER TESTING

This type of testing is usually done for 2-tier applications (usually developed for a LAN). Here we will be having a front-end and a back-end. The application launched on the front-end will have forms and reports which will be monitoring and manipulating data, e.g. applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc. The back-end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc. The tests performed on these types of applications would be:
- User interface testing
- Manual support testing
- Functionality testing
- Compatibility testing & configuration testing
- Intersystem testing

WEB TESTING

This is done for 3-tier applications (developed for Internet / intranet / extranet). Here we will be having a browser, a web server and a DB server. The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript, etc. (we can monitor through these applications). Applications for the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, Cold Fusion, PHP, etc. (all the manipulations are done on the web server with the help of these programs). The DB server would be running Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database available on the DB server). The tests performed on these types of applications would be:
- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load / stress testing
- Interoperability testing / intersystem testing
- Storage and data volume testing

To restate the two architectures: a web application is a three-tier application. It has a browser (monitors data) [monitoring is done using HTML, DHTML, XML, JavaScript] -> web server (manipulates data) [manipulations are done using programming languages or scripts like advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP] -> database server (stores data) [data storage and retrieval is done using databases like Oracle, SQL Server, Sybase, MySQL]. The types of tests which can be applied to this type of application are:
1. User interface testing for validation & user friendliness
2. Functionality testing to validate behaviors, inputs, error handling, outputs, manipulations, service levels, order of functionality, links, content of web pages & back-end coverage
3. Security testing
4. Browser compatibility testing
5. Load / stress testing
6. Interoperability testing
7. Storage & data volume testing

A client-server application is a two-tier application. It has forms & reporting at the front-end (monitoring & manipulation are done) [using VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.] -> database server at the back-end (data storage & retrieval) [using MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.]. The tests performed on these applications would be:
1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystem testing

Some more points to clarify the difference between client-server, web and desktop applications:

Desktop application:
1. Application runs in a single memory space (front end and back end in one place)
2. Single user only

Client/Server application:
1. Application runs on two or more machines
2. Application is menu-driven
3. Connected mode (the connection exists until logout)
4. Limited number of users
5. Fewer network issues when compared to a web application

Web application:
1. Application runs on two or more machines
2. URL-driven
3. Disconnected mode (stateless)
4. Unlimited number of users
5. Many issues such as hardware compatibility, browser compatibility, version compatibility, security issues, performance issues, etc.

The main difference between the two kinds of application lies in how the resources are accessed. In a client-server application, once the connection is made it remains in a connected state, whereas in web testing the HTTP protocol is stateless, and this is where the logic of cookies comes in; cookies have no counterpart in client-server applications. For a client-server application the users are well known, whereas for a web application any user can log in and access the content, and will use it according to his or her own intentions. So there are always issues of security and compatibility for a web application.
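To illustrate the statelessness point, the sketch below shows a client that must carry the session cookie from a login response onto the next request by hand; otherwise the server cannot connect the two requests. The URLs are placeholders, and the snippet assumes the server issues a standard Set-Cookie header.

    // CookieSketch.java -- each HTTP request stands alone, so the session cookie
    // from a login response must be sent back explicitly on later requests.
    // The URLs are placeholders.
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CookieSketch {
        public static void main(String[] args) throws Exception {
            HttpURLConnection login = (HttpURLConnection)
                    new URL("http://test-server/login?user=demo").openConnection();
            String cookie = login.getHeaderField("Set-Cookie");   // e.g. JSESSIONID=...
            System.out.println("Server issued: " + cookie);

            // Without the cookie the next request is anonymous; with it, the server
            // can associate the request with the earlier login.
            HttpURLConnection account = (HttpURLConnection)
                    new URL("http://test-server/account").openConnection();
            if (cookie != null) {
                account.setRequestProperty("Cookie", cookie.split(";", 2)[0]);
            }
            System.out.println("Account page status: " + account.getResponseCode());
        }
    }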

Over to you: On which application are you working? Desktop, client-server or web application? What is your experience while testing these applications?

How to Test Banking Applications


Banking applications are considered to be among the most complex applications in today's software development and testing industry. What makes a banking application so complex? What approach should be followed in order to test the complex workflows involved? In this article we will be highlighting the different stages and techniques involved in testing banking applications.

The characteristics of a Banking application are as follows:


- Multi-tier functionality to support thousands of concurrent user sessions
- Large-scale integration: typically a banking application integrates with numerous other applications such as a Bill Pay utility and Trading accounts
- Complex business workflows
- Real-time and batch processing
- High rate of transactions per second
- Secure transactions
- Robust reporting section to keep track of day-to-day transactions
- Strong auditing to troubleshoot customer issues
- Massive storage system
- Disaster management

The ten points listed above are the most important characteristics of a banking application. Banking applications have multiple tiers involved in performing an operation. For example, a banking application may have:
1. A web server to interact with end users via a browser
2. A middle tier to validate the input and output for the web server
3. A database to store data and procedures
4. A transaction processor, which could be a large-capacity mainframe or any other legacy system, to carry out huge volumes of transactions per second

Testing banking applications requires an end-to-end testing methodology involving multiple software testing techniques to ensure:
- Total coverage of all banking workflows and business requirements
- The functional aspect of the application
- The security aspect of the application
- Data integrity
- Concurrency
- User experience

The typical stages involved in testing banking applications are described below; we will discuss each stage individually.

1) Requirement Gathering:
The requirement gathering phase involves documentation of requirements, either as functional specifications or as use cases. Requirements are gathered according to customer needs and documented by banking experts or business analysts. More than one subject-matter expert is involved in writing the requirements, because banking itself has multiple sub-domains and a full-fledged banking application is the integration of them all. For example, a banking application may have separate modules for Transfers, Credit Cards, Reports, Loan Accounts, Bill Payments, Trading, etc.

2) Requirement Review:
The deliverable of requirement gathering is reviewed by all the stakeholders, such as QA engineers, development leads and peer business analysts. They cross-check that neither existing business workflows nor new workflows are violated.

3) Business Scenario Preparations:


In this stage QA engineers derive business scenarios from the requirement documents (functional specs or use cases); business scenarios are derived in such a way that all business requirements are covered. Business scenarios are high-level scenarios without any detailed steps. These business scenarios are then reviewed by a business analyst to ensure that all business requirements are met; it is easier for BAs to review high-level scenarios than to review low-level, detailed test cases.

4) Functional Testing:
In this stage functional testing is performed, with the usual software testing activities, such as:

Test Case Preparation: Test cases are derived from the business scenarios; one business scenario leads to several positive test cases and negative test cases. Tools generally used during this stage are Microsoft Excel, Test Director or Quality Center.

Test Case Review: Reviews by peer QA engineers.

Test Case Execution: Test case execution can be either manual or automated, using tools such as QC, QTP or others.
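As a purely hypothetical illustration of how one business scenario fans out into test cases, a scenario such as "Customer transfers funds between own accounts" might yield: a positive case that transfers an amount within the available balance and verifies that both account balances and the audit trail are updated; and negative cases that attempt to transfer more than the available balance, transfer to a closed account, or transfer a zero or negative amount, each of which should be rejected with a clear error message and leave both balances unchanged.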

5) Database Testing:
A banking application involves complex transactions which are performed both at the UI level and at the database level; therefore database testing is as important as functional testing. The database is an entirely separate layer in itself, so this testing is carried out by database specialists and uses techniques such as:
- Data loading
- Database migration testing
- DB schema and data types
- Rules testing
- Testing stored procedures and functions
- Testing triggers
- Data integrity
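As a sketch of the last item, one common data-integrity check after a batch run is to reconcile two independent views of the same money movement. The table and column names, connection details and the specific rule below are hypothetical.

    // DataIntegritySketch.java -- sketch of a post-batch reconciliation check:
    // the sum of posted transaction amounts should equal the total balance change
    // recorded in the balance history. All names are hypothetical.
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class DataIntegritySketch {
        static BigDecimal one(Connection con, String sql) throws SQLException {
            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                rs.next();
                BigDecimal v = rs.getBigDecimal(1);
                return v == null ? BigDecimal.ZERO : v;
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/bankdb", "tester", "secret")) {
                BigDecimal posted  = one(con,
                        "SELECT SUM(amount) FROM transactions WHERE trn_date = CURRENT_DATE");
                BigDecimal applied = one(con,
                        "SELECT SUM(balance_delta) FROM balance_history WHERE trn_date = CURRENT_DATE");
                System.out.println(posted.compareTo(applied) == 0
                        ? "PASS: ledgers reconcile"
                        : "FAIL: posted=" + posted + ", applied=" + applied);
            }
        }
    }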

6) Security Testing:
Security testing is usually the last stage in the testing cycle, since completing functional and non-functional testing is the entry criterion for commencing security testing. Security testing is one of the major stages in the entire application testing cycle, as this stage ensures that the application complies with federal and industry standards. The security testing cycle makes sure that the application does not have any web vulnerability which may expose sensitive data to an intruder or attacker, and that it complies with standards such as OWASP. In this stage the major task is a scan of the whole application, which is carried out using tools like IBM AppScan or HP WebInspect (the two most popular tools). Once the scan is complete, the scan report is published; false positives are filtered out, and the remaining vulnerabilities are reported to the development team for fixing according to severity. Other manual tools used for security testing are Paros Proxy, HttpWatch, Burp Suite, Fortify tools, etc.

Apart from the above stages, there might be other stages involved, such as Integration Testing and Performance Testing. In today's scenario the majority of banking projects use Agile/Scrum, RUP and Continuous Integration methodologies, and tool packages such as Microsoft's VSTS and Rational tools. As mentioned above, RUP stands for Rational Unified Process, an iterative software development methodology introduced by IBM that comprises four phases in which development and testing activities are carried out: i) Inception, ii) Elaboration, iii) Construction and iv) Transition. RUP widely involves IBM Rational tools.

In this article we discussed how complex a banking application can be and the typical phases involved in testing it. We also discussed current trends followed by the IT industry, including software development methodologies and tools.

http://www.perlmonks.org/?node_id=350829 (Client server testing link)

http://www.softwaretestinghelp.com/testing-banking-applications/
http://www.softwaretestinghelp.com/database-testing-%E2%80%93-practical-tips-and-insighton-how-to-test-database/

http://www.softwaretestinghelp.com/test-plan-template/

http://www.softwaretestinghelp.com/how-to-write-effective-test-cases-test-cases-proceduresand-definitions/

http://www.softwaretestinghelp.com/tips-to-design-test-data-before-executing-your-test-cases/
http://www.softwaretestinghelp.com/sql-injection-%E2%80%93-how-to-test-application-forsql-injection-attacks/

http://www.softwaretestinghelp.com/security-testing-of-web-applications/

http://www.softwaretestingstuff.com/2008/06/test-strategy.html
