Chapter Outcomes...
* Apply the specified testing level for the given web based application.
* Apply acceptance testing for the given web based application.
* Apply the given performance testing for the specified application.
* Generate test cases for the given application using regression and GUI testing.

Learning Objectives...
* To understand Levels of Testing.
* To study various Types of Testing such as Unit Testing, Integration Testing, Acceptance Testing, etc.

We already know that software testing is a process carried out with the intent of finding software bugs. Levels of testing include the different methodologies that can be used while conducting software testing. Different levels of testing are used in the testing aspects of a software system. Functional testing activities verify a specific action or function of the code; such tests tend to answer the question "can the user do this?" or "does this particular feature work?". Testing, especially for large systems, is usually carried out at different levels. In most cases there will be 3 to 4 levels, or major phases of testing, i.e., unit test, integration test, system test and acceptance test, as shown in Fig. 2.1.

LEVELS OF TESTING
* Following are the main levels of software testing, as shown in Fig. 2.1:
1. Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
2. Integration Testing is a level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
3. System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements.
4. Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.

UNIT TESTING (Drivers and Stubs)
* Testing that occurs at the lowest level is called unit testing or module testing. Unit testing is performed to test the individual units of software. A unit is the smallest part of a software system that is testable.
* A unit may include code files, classes and methods, which can be tested individually for correctness.
* The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. In unit testing each unit is tested separately before the units are integrated into modules, where the interfaces between modules are tested. Fig. 2.2 shows the concept of unit testing.
* The conventional approach to unit testing requires drivers and stubs to be written. A driver is the arrangement, generally code, required to test units individually; it can pass inputs to the unit/module and can take output from the unit/module. A stub simulates a called unit. Because a component is not a standalone program, driver and/or stub software must often be developed for each unit test.
* In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints the relevant results. Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
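Example (Python): the following is a minimal sketch of the driver-and-stub idea using the standard unittest module. The component under test (grade_student) and the subordinate module it calls (fetch_marks) are hypothetical names used only for illustration.

import unittest

def fetch_marks_stub(student_id):
    """Stub: stands in for the real marks-database module.
    It does minimal work and simply returns fixed test data."""
    return {"S1": 82, "S2": 35}[student_id]

def grade_student(student_id, fetch_marks=fetch_marks_stub):
    """Component under test; the subordinate module is injected so the
    stub can replace the real implementation during unit testing."""
    marks = fetch_marks(student_id)
    return "PASS" if marks >= 40 else "FAIL"

class GradeStudentDriver(unittest.TestCase):
    """Driver: a small 'main program' that feeds test-case data to the
    component and checks the results it returns."""
    def test_pass_grade(self):
        self.assertEqual(grade_student("S1"), "PASS")

    def test_fail_grade(self):
        self.assertEqual(grade_student("S2"), "FAIL")

if __name__ == "__main__":
    unittest.main()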
* Both stubs and drivers are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product. Fig. 2.3 shows the stub and driver concept in unit testing.
Example:
* A test driver can replace the real software and more efficiently test a low-level module. Drivers send test-case data to the modules under test, read back the results, and verify that they are correct (Fig. 2.4: Test Driver).
* A test stub sends test data up to the module being tested. For example, suppose a low-level interface module is used to collect temperature data from an electronic thermometer, and a display module sits right above the interface, reads the data from the interface, and displays it to the user. To test the top-level display module with the real hardware, you would need blow torches, water, ice, and a deep freeze to change the temperature of the sensor and have that data passed up the line; a stub that replaces the interface module and feeds simulated readings up to the display module makes this testing far simpler.

Difference between Driver and Stub:
1. Drivers are mainly created for integration approaches such as bottom-up integration; stubs are mainly created for integration approaches such as top-down integration.
2. A driver is very simple to develop; a stub is also very simple to develop.
3. A driver is basically a program that accepts test case data and passes that data to the module that is being tested; stubs are programs that are used to replace modules that are subordinate to the module to be tested.
4. A driver is a piece of software that calls the functions in the unit under test; a stub is a small program routine that substitutes for a longer program which may be loaded later or which is located remotely.
5. A stub is the piece of code emulating a called function; a driver is the piece of code emulating a calling function.

Advantages of Unit Testing:
1. Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit and how to use it can look at the unit tests to gain an understanding of the unit's interface (API).
2. Unit tests find problems early in the development cycle.
3. Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly.
4. Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach: if the code is built and tested in pieces, each piece works properly, and the units can then gradually be put into groups, linked together and tested as a whole group.

Disadvantages of Unit Testing:
1. The biggest disadvantage of unit testing is the initial time required to develop the tests.
2. Testing will not catch every error in the program, since it cannot evaluate every execution path in any but the most trivial programs. The same is true for unit testing.
3. Another problem related to writing unit tests is the difficulty of setting up realistic and useful tests.
4. Unit testing by definition only tests the functionality of the units themselves; it will not catch integration errors or broader system-level errors. Unit testing should therefore be done in conjunction with other software testing activities, as unit tests can only show the presence or absence of particular errors; they cannot prove a complete absence of errors.

INTEGRATION TESTING
* Integration is a process by which components are aggregated to create larger components. Testing the data flow or interface between two features is known as integration testing. Testing that occurs at the lowest level is unit/module testing;
as the units are tested and bugs are found and fixed, they are integrated and integration testing is performed (see Fig. 2.6).
* Integration testing is done at module level, where different units and components come together. Fig. 2.6 shows a system schematically; integration testing mainly focuses on input/output protocols and parameter passing between different units, modules and/or the system (Fig. 2.6: Integration Testing with System Levels).
* The process of incremental testing continues, putting together more and more pieces of the software until the entire product, or at least a major portion of it, is tested at once in a process called system testing.
* With this testing strategy, it is much easier to isolate bugs. When a problem is found at the unit level, the problem must be in that unit. If a bug is found when multiple units are integrated, it must be related to how the modules interact.
* Of course, there are exceptions to this, but by and large testing and debugging in this manner is much more efficient than testing everything at once.

Working of Integration Testing:
* Once unit testing is complete, integration testing begins.
* In integration testing, the units validated during unit testing are combined to form a subsystem.
* The integration testing is aimed at ensuring that all the modules work properly as per the user requirements when they are put together, i.e. integrated.
* The objective of integration testing is to take all the tested individual modules, integrate them, test them again, and develop the software according to design specifications. Fig. 2.7 shows integration testing.

Advantages of Integration Testing:
1. Changes made to individual unit modules that were not re-tested at the unit level are exercised, and any resulting errors surface when integration testing is done.
2. It is easier to fix an error found in integration testing when compared to one found in system testing.
3. The interfaces are checked thoroughly, and any error messages produced across module boundaries are verified.

Top-Down Integration
* Top-down integration is considered an incremental integration testing technique which begins by testing the top-level module and adds in lower-level modules one by one, until it reaches the final component of the system.
* Lower-level modules are normally simulated by stubs which mimic the functionality of the lower-level modules. As you add lower-level code, the stubs are replaced with the actual components.
* The top-down integration approach needs design and implementation of stubs. At every phase, one may need to design stubs to take care of lower-level components which are not available at that time.
* In this approach, the top-level components are the user interfaces, which are created first, to elicit requirements or to create a prototype.
* Approaches like prototyping, formal proof of concept, test driver development, etc. use the top-down approach for testing.
* Top-down integration can be performed and tested in breadth-first or depth-first manner; a top-down integration and testing structure is shown in Fig. 2.8. In depth-first integration, all modules on a control path are integrated first; in breadth-first integration, all modules directly subordinate at each level are integrated together.
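Example (Python): a minimal sketch of top-down integration, in which the top-level module is exercised first with a stub standing in for a lower-level module, and the stub is later replaced by the real component. The module names (billing_top_module, tax_calculator) are illustrative assumptions.

import unittest

def tax_calculator_stub(amount):
    # Stub standing in for the not-yet-integrated lower-level module.
    return 0.0

def tax_calculator_real(amount):
    # Real lower-level component that replaces the stub once available.
    return round(amount * 0.18, 2)

def billing_top_module(amount, tax_calculator):
    # Top-level module under test; it invokes the lower-level component.
    return amount + tax_calculator(amount)

class TopDownIntegration(unittest.TestCase):
    def test_with_stub(self):
        # Early integration round: lower level simulated by a stub.
        self.assertEqual(billing_top_module(100.0, tax_calculator_stub), 100.0)

    def test_with_real_component(self):
        # Later round: the stub is replaced by the actual component.
        self.assertEqual(billing_top_module(100.0, tax_calculator_real), 118.0)

if __name__ == "__main__":
    unittest.main()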
Advantages of Top-Down Integration Testing:
1. In this approach the feasibility of an entire program can be determined easily at a very early stage, as the topmost layer, generally the User Interface (UI), is made first. This approach is good if the application has a user interface as a major part.
2. A number of times the top-down approach does not need drivers, as the top layers are already available and can work as drivers for the layers below.
3. This approach can detect major flaws in system design by taking inputs from users. Prototyping is used extensively in agile application development, where user requirements are clarified by preparing a model. If software development is considered as an activity associated with user learning, then prototyping gives an opportunity to the user to learn things.
4. This approach provides an early working module of the program, and so design defects can be found and corrected early.

Disadvantages of Top-Down Integration Testing:
1. In this approach units and modules are rarely tested alone before their integration. There may be defects in individual units/modules which get compensated/camouflaged in testing and may not be found in top-down integration.

Bottom-Up Integration
* Bottom-up integration can be considered as the opposite of top-down integration.
* The bottom-up integration testing approach focuses on testing the bottom part/individual modules first and then goes upward by integrating the tested and working units and modules.
* In bottom-up integration each sub-system is tested separately and then the full system is tested; a subsystem may consist of many modules which communicate among each other through defined interfaces. In this approach control and data interfaces are tested.
* Bottom-up integration testing starts at the atomic module level (atomic modules are the lowest levels in the program structure). The bottom-up approach is generally used for object-oriented design and general-purpose utility modules.
* A bottom-up integration is implemented with the following steps:
Step 1: Low-level modules are combined into clusters that perform a specific software function. These clusters are sometimes called builds.
Step 2: A driver (a control program for testing) is written to coordinate test case input and output.
Step 3: The build is tested.
Step 4: Drivers are removed and clusters are combined moving upward in the program structure.
* Fig. 2.9 shows how the bottom-up integration is done. Whenever a new module is added as a part of testing, the program structure changes. There may be new data flow paths, some new Input/Output (I/O) or some new control logic. These changes may cause problems with functions in the tested modules which were working fine previously; to detect these errors, regression testing is done.

Advantages of Bottom-up Integration Approach:
1. In this approach each component and unit is tested first for its correctness; only when it is working correctly does it go for further integration.
2. Incremental integration testing is useful where individual components are proven before integration.
3. This approach makes a system more robust, since the individual units are known to be working.

Disadvantages of Bottom-up Integration Approach:
1. In this approach, the top-level components are the most important but are tested last; their late delivery may cause the problem of not completing testing in time.
2. There can be major problems during integration if interface testing reveals mismatches between components.
3. In bottom-up integration, objects are combined one at a time, which may result in slow testing. The time required for complete testing may be very long and may disrupt the entire delivery schedule.
4. Designing and writing stubs and drivers for testing is a waste of work, as they do not form part of the final system.
5. In this approach, stubs and drivers are to be written and tested before using them in testing. One needs to maintain review and test records of stubs and drivers to ensure that they do not introduce any defect.
6. For the initial phases, in this approach one may need both stubs and drivers. As one goes on integrating units, the original stubs may be reused, while a large number of new drivers may be required.
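Example (Python): a minimal sketch of the four bottom-up steps described above — two low-level modules are combined into a cluster (build) and exercised by a driver. The module names and data format are illustrative assumptions.

import unittest

# Low-level modules (already unit tested).
def parse_record(line):
    name, marks = line.split(",")
    return name.strip(), int(marks)

def compute_average(marks_list):
    return sum(marks_list) / len(marks_list)

def result_cluster(lines):
    """Cluster/build (step 1): the two low-level modules combined into
    one software function."""
    records = [parse_record(line) for line in lines]
    return compute_average([marks for _, marks in records])

class ClusterDriver(unittest.TestCase):
    """Driver (step 2): coordinates test-case input and output for the
    build; it is removed once the cluster moves upward (step 4)."""
    def test_build(self):  # step 3: the build is tested
        self.assertEqual(result_cluster(["A, 40", "B, 60"]), 50.0)

if __name__ == "__main__":
    unittest.main()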
Bi-Directional Integration
* Bi-directional integration is also referred to as mixed integration testing or sandwich testing.
* Bi-directional integration is a kind of integration testing process that combines top-down and bottom-up integration testing.
* Sandwich testing divides testing into two parts and follows both parts, starting from both the top-down approach and the bottom-up approach, either simultaneously or one after another.
* In the top-down approach testing can start only after the top-level modules have been coded and unit tested; similarly, bottom-up testing can start only after the bottom-level modules are ready. The sandwich approach overcomes this shortcoming of the top-down and bottom-up approaches: in the sandwich integration approach, testing can start as and when modules become available. Hence this is one of the most commonly used integration testing approaches.
* Sandwich integration testing is also a vertical incremental testing strategy that tests the bottom layers and the top layers and tests the integrated system in the software development process.
* Using stubs, it tests the user interface in isolation, and it also tests the lower-level functions using drivers. Fig. 2.10 shows a sandwich testing approach.

Process of Sandwich Testing: The sandwich testing approach follows the following steps:
Step 1: In this approach, bottom-up testing starts from the middle layer and goes upward to the top layer. Generally, for very big systems, the bottom-up approach starts at a sub-system level and goes upwards.
Step 2: In sandwich testing, top-down testing starts from the middle layer and goes downward. Generally, for very big systems, the top-down approach starts at the subsystem level and goes downwards.
Step 3: In this approach, a big-bang approach is followed for the middle layer. From this layer the bottom-up approach goes upwards and the top-down approach goes downwards.

Advantages of Sandwich Integration Testing:
1. In this integration, both the top-down and bottom-up approaches start at a time as per the schedule. Units are tested and brought together to make a system.
2. This approach is useful for very large projects having several sub-projects, or when a project follows a spiral model and the system is developed as sub-systems that are later integrated.

SYSTEM TESTING
* System testing represents the final testing done on a software system before it is delivered to the user.
* In system testing, the system is tested against functional/non-functional requirements such as accuracy, reliability, and speed defined by the user/customer in the software requirement specification.
* The goal of system testing is to find defects in features of the system compared to the way it has been defined in the software system requirements. The test object is the fully integrated system.
* IEEE defines system testing as 'testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements'.
* Fig. 2.11 shows the system testing concept.
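Example (Python): a minimal sketch of an automated system test run against a fully integrated, deployed build. The base URL, endpoints and expected behaviour are hypothetical assumptions; a real suite would be derived from the requirement specification.

import unittest
import requests

BASE_URL = "http://localhost:8000"   # assumed test deployment

class SystemTests(unittest.TestCase):
    def test_login_requirement(self):
        # Functional requirement: a registered user can log in.
        resp = requests.post(f"{BASE_URL}/login",
                             json={"user": "demo", "password": "demo123"},
                             timeout=5)
        self.assertEqual(resp.status_code, 200)

    def test_response_time_requirement(self):
        # Non-functional requirement: home page answers within 2 seconds.
        resp = requests.get(f"{BASE_URL}/", timeout=5)
        self.assertLess(resp.elapsed.total_seconds(), 2.0)

if __name__ == "__main__":
    unittest.main()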
Benefits of System Testing:
1. System tests help clearly specify how the application should behave.
2. System tests can be run automatically (for example, each night) so that testing is done as the application is being developed.
3. System tests help you test that the application is working correctly from the point of view of a user.
4. System tests help you test that changes made to one part of the application haven't created a bug somewhere else.

TESTING ON WEB PAGE APPLICATION
* A web based application is an application which can be accessed and used over a network: the Internet, an Intranet or an Extranet. The Internet is the worldwide collection of interconnected networks. Intranets are the networks which are used within organizations by internal employees, while extranets are the networks which are used by an organization internally as well as by the business partners of the organization.
* Web-based architecture is an extension of client/server architecture, but there is a difference between the two architectures. In client/server architecture the client workstations have the application software which is used to communicate with the application server, whereas in a web-based application the client machines have web browsers, and these client machines are networked to the web server by either LAN (Local Area Network) or WAN (Wide Area Network).
* Let us see what testing is to be carried out in software web testing. The testing performed depends on your web testing requirements, but the following are the common types of testing for a web application.

Performance Testing
* Performance testing occurs throughout the testing process; however, it is not until all system elements are fully integrated that the true performance of a system can be ascertained. Performance testing often requires both hardware and software instrumentation.
* Performance testing is a non-functional testing technique performed to determine the system parameters in terms of responsiveness and stability under various workloads. Performance testing measures the quality attributes of the system, such as scalability, reliability and resource usage.
* The testing done to evaluate the response time (speed), throughput and utilization of a system while executing its required functions, in comparison with different versions of the same product or a different competitive product, is called Performance Testing.
* Performance testing is done to derive benchmark numbers for the system. Heavy load is not applied to the system. Tuning is performed until the system under test achieves the expected levels of performance.
* Performance tests are designed to simulate real-world loading situations. As the number of simultaneous WebApp users grows, or the number of online transactions increases, or the amount of data (downloaded or uploaded) increases, performance testing will help answer the following questions:
1. Does the server response time degrade to a point where it is noticeable and unacceptable?
2. At what point (in terms of users, transactions, or data loading) does performance become unacceptable?
3. What system components are responsible for performance degradation?
4. Is WebApp reliability or accuracy affected as the load on the system grows?
5. What happens when loads that are greater than maximum server capacity are applied?
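Example (Python): a minimal sketch of how the first of these questions can be examined — a small harness that measures average response time and throughput for one endpoint. The URL and request count are illustrative assumptions.

import time
import requests

URL = "http://localhost:8000/api/items"   # assumed endpoint
N_REQUESTS = 50

def measure():
    timings = []
    start = time.perf_counter()
    for _ in range(N_REQUESTS):
        t0 = time.perf_counter()
        requests.get(URL, timeout=10)
        timings.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"average response time: {sum(timings) / len(timings):.3f} s")
    print(f"throughput: {N_REQUESTS / elapsed:.1f} requests/second")

if __name__ == "__main__":
    measure()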
Why Performance Testing:
* Identifies problems early on, before they become costly to resolve.
* Reduces development cycles.
* Produces better quality, more scalable code.
* Prevents revenue and credibility loss due to poor web site performance.
* Enables intelligent planning for future expansion.
* Ensures that the system meets performance expectations such as response time under given levels of load.
* Exposes bugs that do not surface in cursory testing, such as memory leaks, buffer overflows, etc.

Factors that Govern Performance Testing:
1. Throughput: The capability of a product to handle multiple transactions in a given period. Throughput represents the number of requests/business transactions processed by the product in a specified time duration. As the number of concurrent users increases, the throughput increases almost linearly with the number of requests, as long as there is very little congestion within the Application Server system queues.
2. Response Time: The time taken by the system to respond to a request, i.e., the delay between the point of request and the first response from the product.
3. Tuning: Tuning is the procedure by which product performance is enhanced by setting different values for the parameters of the product, the operating system and other components. Tuning improves the product performance without having to touch the source code of the product.
4. Benchmarking: A very well-improved performance of a product makes no business sense if that performance does not match up to the competitive products. A careful analysis is needed to chalk out the list of transactions to be compared across products, so that an apple-to-apple comparison becomes possible.

Performance Testing Techniques:
1. Load Testing: It is the simplest form of testing, conducted to understand the behaviour of the system under a specific load. Load testing results in measuring important business-critical transactions, and the load on the database, application server, etc. is also monitored.
2. Stress Testing: It is performed to find the upper limit capacity of the system and to determine how the system performs if the current load goes well above the expected maximum.
3. Soak Testing: Soak testing, also known as endurance testing, is performed to determine the system parameters under continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The aim is to discover the system's performance under sustained use.
4. Spike Testing: Spike testing is performed by increasing the number of users suddenly by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.

Performance Testing Methodology/Process:
* The methodology adopted for performance testing can vary widely, but the objective of performance tests remains the same. It can help demonstrate that your software system meets certain pre-defined performance criteria, or it can help compare the performance of two software systems. It can also help identify parts of your software system which degrade its performance.
* Fig. 2.13 shows the generic performance testing process (identify the test environment, identify acceptance criteria, plan and design tests, configure the test environment, implement the test design, execute the tests, analyze and retest). Fig. 2.13 involves the following steps for performance testing:
1. Identify the Testing Environment: Know your physical test environment, production environment and what testing tools are available. Understand details of the hardware, software and network configurations used during testing before you begin the testing process. It will help create more efficient tests. It will also help identify possible challenges that may arise during the performance testing procedures.
2. Identify the Performance Acceptance Criteria: This includes goals and constraints for throughput, response times and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints.
Testers should be empowered to set performance criteria and goals, because often the project specifications will not include a wide enough variety of performance benchmarks; sometimes there may be none at all. When possible, finding a similar application to compare to is a good way to set performance goals.
3. Plan and Design Performance Tests: Determine how usage is likely to vary amongst end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data and outline what metrics will be gathered.
4. Configure the Test Environment: Prepare the test environment and arrange the tools and other resources needed before execution.
5. Implement the Test Design: Create the performance tests according to your test design.
6. Run the Tests: Execute and monitor the tests.
7. Analyze, Tune and Retest: Consolidate, analyze and share the test results. Then fine-tune and test again to see if there is an improvement or a decrease in performance. Since improvements generally grow smaller with each retest, stop when bottlenecking is caused by the CPU; then you may have to consider the option of increasing CPU power.

Advantages of Performance Testing:
* It assesses whether a component of a system meets the stated performance requirements.
* It compares different systems to determine which system performs better.

Load Testing
* Load testing is a software testing technique used to examine the behavior of a system when subjected to both normal and extreme expected load conditions.
* Load testing is generally performed under controlled laboratory conditions in order to distinguish between two different systems or to accurately measure the capabilities of a single system.
* It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation; for example, if the number of users is increased, how much CPU and memory will be consumed, and what the network and bandwidth usage will be.
* Load testing involves simulating real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously.
* Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme workloads or when some of its hardware or software is compromised.
* The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.

Examples of Load Testing:
1. Downloading a series of large files from the Internet.
2. Running multiple applications on a computer or server simultaneously.
3. Assigning many jobs to a printer in a queue.
4. Subjecting a server to a large amount of traffic.
5. Writing and reading data to and from a hard disk continuously.

Advantages of Load Testing:
1. Load testing will expose bugs such as undetected memory overflows and memory management bugs in your system.
2. Load testing enables you to increase the uptime of your system.
3. It can measure the performance of your internet infrastructure. For example, if you are engaged in e-commerce, you can monitor how your business is doing, especially when there are many concurrent users who hit your site.
4. Load testing will also prevent software failures, because it can predict how the system will behave when it is given large loads of files and large amounts of tasks.
5. You will be able to protect your investment, because this kind of testing allows you to have an idea of the scalability and the performance of your software.
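Example (Python): a minimal load-test sketch in which a fixed number of simulated concurrent users repeatedly hit one endpoint, and the average response time and error count are reported. The URL and user counts are illustrative assumptions.

import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def one_user():
    timings, errors = [], 0
    for _ in range(REQUESTS_PER_USER):
        t0 = time.perf_counter()
        try:
            resp = requests.get(URL, timeout=10)
            if resp.status_code != 200:
                errors += 1
        except requests.RequestException:
            errors += 1
        timings.append(time.perf_counter() - t0)
    return timings, errors

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(lambda _: one_user(), range(CONCURRENT_USERS)))
    all_timings = [t for timings, _ in results for t in timings]
    total_errors = sum(errors for _, errors in results)
    print(f"average response time under load: {sum(all_timings) / len(all_timings):.3f} s")
    print(f"failed requests: {total_errors}")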
Stress Testing
* Stress testing is designed to determine the behavior of the software under abnormal situations. In stress testing, the test cases are designed to execute the system in such a way that abnormal conditions arise.
* IEEE defines stress testing as 'testing conducted to evaluate a system or component at or beyond the limits of its specified requirements'.
* It is running the software under less-than-ideal conditions: low memory, low disk space, slow CPU, slow modems and so on.
* It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It is a form of software testing that is used to determine the stability of a given system.
* It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
* The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
* The purpose behind stress testing is to determine the failure point of the system and to monitor how the system recovers back gracefully. The challenge here is to set up a controlled environment before launching the test, so that you can precisely capture the behaviour of the system repeatedly under the most unpredictable scenarios.

Examples of Stress Testing:
1. Running several resource-intensive applications on a single computer at the same time.
2. Flooding a server with useless e-mail messages.
3. Making numerous, concurrent attempts to access a single web site.
4. Attempting to infect a system with viruses, Trojans, spyware or other malware.

Advantages of Stress Testing:
1. Stress testing indicates the expected behavior of a system when it reaches the extreme level of its capacity.
2. It executes a system till it fails. This enables the testers to determine the difference between the expected operating conditions and the failure conditions.
3. Stress testing determines the part of a system that leads to errors, and it determines the amount of load that causes the system to fail.
4. Stress testing evaluates a system at or beyond its specified limits of performance.

Difference between Load Testing and Stress Testing:
1. In load testing, the load is increased constantly and steadily till the time it reaches the expected limit; in stress testing, the existing resources are overloaded and excess jobs are carried out in an attempt to break the system.
2. In load testing the main parameter of focus is response time; in stress testing we measure the breakpoint of the system.
3. Load testing is the subset of stress testing; stress testing is the superset of load testing.
4. Load testing means testing the application for a given load (for example, downloading a series of large files from the Internet, or sending many jobs to a printer in a queue); stress testing means testing the application for an unexpected load.

Security Testing
* Any computer-based system that manages sensitive information and causes actions that can improperly harm or benefit individuals is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who want the confidential data of banks, employees who attempt to penetrate systems for the confidential data of offices, and many more can be the reasons.
* Security testing attempts to verify that the protection mechanisms built into a system will protect it from improper penetration.
* The system's security must be tested for invulnerability from frontal attack, but must also be tested for invulnerability from side or back attack. During security testing, the tester plays the role of the individual who desires to penetrate the system.
* The tester may attempt to acquire passwords through external clerical means, may attack the system with custom software designed to break down any defenses that have been constructed, may overpower the system by denying service to others, may purposely cause system errors hoping to penetrate during recovery, or may browse through insecure data hoping to find the key to system entry.
* Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make the cost of penetration more than the value of the information that would be obtained.
* Security testing verifies that the system accomplishes all the security requirements and verifies the effectiveness of these security measures.
* Security testing is a testing technique to determine whether an information system protects data and maintains functionality as intended. It also aims at verifying six basic principles, as mentioned below:
1. Confidentiality: Means preserving authorized restrictions on information access.
2. Integrity: Means guarding against improper information modification/distribution.
3. Authentication: Ensures that the individual is who he/she claims to be, but says nothing about the access rights of the individual.
4. Authorization: Means giving someone permission to do or have something.
5. Availability: Ensuring timely and reliable access to and use of information.
6. Non-repudiation: Ensuring that the sender and receiver cannot later deny having sent or received the messages.

Example of Security Testing Techniques:
1. Spoofing Identity:
- Attempt to force the application to use no authentication; is there an option that allows this, which a non-administrator can use?
- Can a valid user's credentials be viewed on the wire or in persistent storage?
- Can you bypass an authentication stage?
- Can "security tokens" (e.g. a cookie) be replayed?
2. Tampering with the Data:
- Is it possible to tamper with data and then rehash the data?
- Create invalid hashes and digital signatures to verify that they are checked correctly.
3. Repudiation:
- Do conditions exist that prevent logging or auditing?
- Is it possible to create events that create incorrect data in an event log?
4. Information Disclosure:
- Look for data that can be accessed only by more privileged users, and for sensitive data written to disk.
- Make the application fail in a way that discloses useful information to an attacker (for example, in error messages).
- Kill the process and then perform disk scavenging.
5. Denial of Service (DoS):
- Flood a process with so much data that it stops responding to valid requests.
- Does malformed data crash the process?
6. Elevation of Privilege:
- Can you execute data as code?
- Can an elevated process be forced to load a command shell, which in turn will execute with elevated privileges?

Advantages of Security Testing:
1. Security testing determines whether proper techniques are used to identify security risks.
2. It verifies that appropriate protection techniques are followed to secure the system.
3. Security testing ensures that the system is able to protect its data and maintain its functionality.
4. Security testing conducts tests to ensure that the implemented security measures work properly.
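Example (Python): a minimal sketch of two automated checks corresponding to the authentication and authorization principles above. The URLs, token value and expected status codes are hypothetical assumptions about the system under test.

import unittest
import requests

BASE_URL = "http://localhost:8000"

class SecurityTests(unittest.TestCase):
    def test_authentication_required(self):
        # A request without credentials must not expose protected data.
        resp = requests.get(f"{BASE_URL}/api/accounts", timeout=5)
        self.assertIn(resp.status_code, (401, 403))

    def test_authorization_enforced(self):
        # A valid but non-admin token must not reach the admin interface.
        headers = {"Authorization": "Bearer normal-user-token"}
        resp = requests.get(f"{BASE_URL}/admin/users", headers=headers, timeout=5)
        self.assertEqual(resp.status_code, 403)

if __name__ == "__main__":
    unittest.main()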
Client/Server Testing
* Client/server architectures allow complex systems to be assembled from components. Multiple operating systems, changing technologies and greater architectural complexity make integration more difficult.
* Risks such as poor reliability, poor performance, configuration management, security and functional issues are more prominent. None of the risks in client/server are new, but there is a change in emphasis. Since its purpose is to address risk, the emphasis of testing in client/server must change accordingly.
* The complexity of client/server also makes testing more difficult. Complex systems can be developed faster because they are assembled from bought-in components. Testing is often estimated to be in proportion with the development cost, but in a client/server system the development cost may be small compared with the overall system cost.
* A client/server test strategy must identify the risks of concern and define a test process to address them. A problem is cheaper to fix if identified early, so the test process must parallel the development process; testing of a deliverable should occur as soon as it is available. Fig. 2.15 shows the objectives, techniques and responsibility for the test stages; these test stages are directly comparable to the test stages in more traditional host-based systems.
* The changed emphasis in testing client/server is associated with integration and non-functional testing. Integration is a big issue because client/server systems are usually assembled from around twelve components (for a simple 2-tier system) to perhaps twenty components for a complex architecture. These components are usually sourced from multiple suppliers, and although standards are emerging, client/server architectures often use components in combinations which have never been used before.
* The total number of interfaces involved makes interface problems and inter-component conflicts more likely. Assumptions made by one developer may not match assumptions made by another developer, and a component from one supplier may conflict with one from another; these problems may be encountered for the first time ever in your installation. Getting suppliers to take such problems seriously may be difficult, because in client/server it is the system integrator who takes responsibility for integration.
* Performance consistently presents a problem. Systems that process large volumes of data across multiple architectural layers may carry substantial overheads; delays between distributed processes may be only ten or twenty milliseconds each, but if a transaction requires hundreds of network messages, the delays add up.
* Other non-functional issues such as security, backup and recovery and system administration present risks. What was taken for granted in a mainframe environment often presents problems in client/server.
* The test strategy must address all these risks, but testing does not happen 'at the end'. Testing occurs at all stages and includes reviews, walkthroughs and inspections. Developers should be responsible for the products they deliver and should test their own code. System tests should cover non-functional areas as well as the functionality. Fig. 2.14 shows the client-server architecture.
* Client/server testing involves component testing and integration testing, followed by various specialised testing as per the scope of testing involved. The client-server testing approaches include the following:
1. Component Testing: One needs to define the approach and test plan for testing the client and the server individually. While testing the server one may need a client simulator, while testing a client one may need a server simulator, and to test the network one may need both client and server simulators at a time.
2. Integration Testing: After successful testing of the server, clients and network, they are brought together to form the system, and system test cases are executed. The communication between client and server is tested in integration testing.
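Example (Python): a minimal sketch of a client simulator used to component-test a server in isolation, as described above — the real client is not available, so a small script plays its role. The host, port and request protocol are illustrative assumptions.

import socket

SERVER_ADDRESS = ("localhost", 9000)   # assumed server under test

def simulate_client(request_line):
    """Open a connection, send one request and return the raw reply,
    exactly as the real client component would."""
    with socket.create_connection(SERVER_ADDRESS, timeout=5) as conn:
        conn.sendall(request_line.encode("utf-8"))
        return conn.recv(4096).decode("utf-8")

if __name__ == "__main__":
    reply = simulate_client("GET_BALANCE ACC001\n")
    print("server replied:", reply)
    assert reply.startswith("OK"), "unexpected reply from server component"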
* There are several special tests involved in client-server testing, notably:
1. Performance Testing: System performance is tested with a specified number of clients communicating with the server at a time. Similarly, volume testing may be done for client-server applications; since the number of clients is known, one can test the system under maximum load as well as under normal load. Various interactions may be used for stress testing.
2. Concurrency Testing: It is a very important testing for client-server architecture. It is possible that multiple users may be accessing the same record at a time, so concurrency testing is required to understand the behaviour of the system under such circumstances.
3. Disaster Recovery/Business Continuity Testing: When the client and server are communicating with each other, there exists a possibility of breaking of the communication due to network reasons or failure of either the client, the server or the link connecting them. Tests for disaster recovery and business continuity may be involved to understand how the system behaves in such a disaster.
4. Testing for Extended Periods: In case of client/server applications, generally the server is not shut down unless there is some agreed Service Level Agreement (SLA) where the server may be brought down for maintenance. It may be expected that the server runs 24 x 7 for extended periods, so one needs to conduct testing over an extended period to understand whether the service level of the network and server deteriorates over time due to reasons like memory leakage.
5. Compatibility Testing: The client and server may be put in different environments when the users are using them in production. Servers may be on different hardware, software or operating systems than expected, with different environmental variables. Testing must ensure that performance is maintained on the expected range of hardware and software configurations, and users must be adequately protected from configuration mismatch. Similarly, any limiting factors must be communicated to prospective users.

ACCEPTANCE TESTING
* Acceptance testing is performed after system testing to determine whether or not the software has met the requirement specifications.
* The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
* Acceptance testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria. IEEE defines acceptance testing as 'formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system'.
* Acceptance testing is a level of the software testing process where a system is tested for acceptability. Acceptance testing is designed to: (i) determine whether the software is fit for the user/customer, and (ii) enable the customer to decide whether to accept the software product.
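Example (Python): a minimal sketch of an automated acceptance test written against a business-level acceptance criterion ("a customer can place an order and receives an order number"). The API shape is a hypothetical assumption; real acceptance criteria come from the contract and the user.

import unittest
import requests

BASE_URL = "http://localhost:8000"

class OrderAcceptanceTest(unittest.TestCase):
    def test_customer_can_place_an_order(self):
        resp = requests.post(f"{BASE_URL}/orders",
                             json={"item": "notebook", "quantity": 2},
                             timeout=5)
        self.assertEqual(resp.status_code, 201)
        self.assertIn("order_number", resp.json())

if __name__ == "__main__":
    unittest.main()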
This testing gives user an opportuni actually accepting i fram i embed ensure that software meets user requirements, before 2, Itiseasier and simpler to run an acceptance test compared to other of test. Itenables both users and software developers toi tee eee a This testing determines the readi operations. readiness (state of being ready to operate) of software to perform 5, Itdecreases the possibility of software failure toa large extent. pisadvantages of Acceptance Testing: : 1. The users may provide feedback without having proper knowledge of the software. 2. Users are not professional testers, they may nct be able to either discover all software failures oF accurately describe some failures, : Acceptance Testing in SDLC: + Fig. 2.16 shows the fitment of acceptance testing in the software development life cycle, + The acceptance test cases are executed against the test data or using an acceptance test script and then the results are compared with the expected ones. Acceptance Criteria: TD + Acceptance testing is a test conducted to determine if the requirements of a specifica-tion or contract are met. + Acceptance criteria may work at each stage of software development and testing, starting from. proposal stage till the point where the system is formally accepted by the customer/user. E Proposal and contract must meet the acceptance criteria so that requirement gathering phase can be initiated. The contract must contain acceptance criteria for each phase as applicable. “Acceptance criteria may be used asa basis on which exit criteria for each phase and entry criteria of next phase may be defined. : irements must pass acceptance criteria at each phase, so that which may be as per entry criteria for design (hig) Types and octagon kas Ue a_i that it can be taken for coding and coding Mt ag . la 20, Designs must fulfill acceptance criteria shat 8 oN Torn acceptance criteria so that system testing , i d by | . i £ development is completed when it is accepted by ing Similarly, each and every ge of ATT ders and the net phase caf be started external customer as well as concerned s * Acceptance testing at each stage is devised so that the problems are found in terms of ‘not meeting exit criteria’ and can be fixed immediately before a new phase starts. + Fig. 2.17 considered as a workbench approach in + For each and every phase during development . n f vrofined us exit criteria and acceptance criteria of the earlier phase. Similarly, exit criteria of qi must meet with entry criteria of the next phase. Thus, life cycle acceptance concentrates gq deliverable satisfying the input criteria of next phase or stage. ; «While defining acceptance criteria, user must do the following: 1. Acquire full knowledge of the applicable expected to be delivered by the development te them. 2. Become fully acquainted with applicatio which must be known from users perspective. 3, Understand risk/benefits of having/not having different requirements and q required by the common user. Definition: * Microsoft press defines acceptance criteria as, “conditions that a software product must accepted by a user, customer or other stakeholder”. © Google defines as, "pre-established standards or requirements a product or project must m Acceptance criteria is define as, “the list of requirements that must be satisfied prior to the accepting delivery of the product”. 
* Acceptance criteria are defined on the basis of the following attributes: 1. Functional Correctness and Completeness, 2. Data Integrity, 3. Data Conversion, 4. Usability, 5. Performance, 6. Timeliness, 7. Confidentiality and Availability, 8. Installability and Upgradability, 9. Scalability, 10. Documentation.

Alpha Testing and Beta Testing
* It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted, strange combinations of data may be regularly used, and output that seemed clear to the tester may be unintelligible to a user in the field.
* When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. Conducted by the end user rather than software engineers, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time.
* Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.

Alpha Testing
* Alpha testing is a form of acceptance testing and involves both white-box and black-box testing. The company employees test the software, so that any functions and features may be exercised and feedback given. The main features of alpha testing are:
1. Outside users are not involved while testing.
2. White-box and black-box practices are used.
3. Developers are involved.
* Alpha testing is performed at the developer's site, with the developer checking over the customer's shoulder as they use the system.
* Alpha testing is considered a form of internal acceptance testing in which the users test the software at the developer's site. In other words, this testing assesses the performance of the software in the environment in which it is developed.
* On completion of alpha testing, users report the errors to the software developers so that they can correct them before the software is released. Fig. 2.18 shows alpha testing.

Advantages of Alpha Testing:
1. Provides a better view of the reliability of the software at an early stage.
2. Helps simulate real-time user behavior and environment.
3. Detects many showstopper or serious errors.
4. Provides early detection of errors with respect to design and functionality.

Disadvantages of Alpha Testing:
1. In-depth functionality cannot be tested, as the software is still under the development stage.
2. Sometimes developers and testers are dissatisfied with the results of alpha testing.

Beta Testing
* Beta testing is the term used to describe the external testing process in which the software is distributed to a select group of potential customers who use it in a real-world environment.
* Beta testing usually occurs toward the end of the product development cycle and ideally provides validation that the software is ready to be released to real customers.
* Beta tests can be a good way to find compatibility and configuration bugs, provided a good mix of experienced and inexperienced users has been chosen to find anything that is wrong.
* Beta testing is performed to know whether the developed software satisfies user requirements and fits within the business processes. Fig. 2.19 shows beta testing.
* Both alpha and beta testing are very important while checking the software functionality and are necessary to make sure that all users' requirements are met in the most efficient way.

Advantages of Beta Testing:
1. Beta testing allows a company to test its post-launch infrastructure.
2. Reduces product failure risk via customer validation.
3. Improves product quality via customer feedback.
4. Cost effective compared to similar data gathering methods.
5. Creates goodwill with customers and increases customer satisfaction.

Disadvantages of Beta Testing:
1. Test management is an issue. As compared to testing inside a company in a controlled environment, beta testing is executed out in the real world, where you seldom have control.
2. Finding the right beta users and maintaining their participation can be a challenge.

Difference between Alpha Testing and Beta Testing:
1. Alpha testing is performed by testers who are usually internal employees of the organization; beta testing is performed by clients or users who are not employees of the organization.
2. Alpha testing is performed at the developer's site; beta testing is performed at the client's location or by the end user of the product.
3. Reliability and security testing are not performed in depth in alpha testing; reliability, security and robustness are checked during beta testing.
4. Alpha testing involves both the white-box and black-box techniques; beta testing typically uses black-box testing.
5. Alpha testing requires a lab environment or testing environment; beta testing doesn't require a testing environment, as the software is made available to the public.
6. A long execution cycle may be required for alpha testing; only a few weeks of execution are required for beta testing.
7. Critical issues or fixes can be addressed by developers immediately in alpha testing; most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
8. Alpha testing is done to ensure the quality of the product before moving to beta testing; beta testing also concentrates on the quality of the product, but it gathers users' feedback.

SPECIAL TESTS
* There are certain types of tests which come under the special category. These tests are planned and documented following the same rules and standards as the other types of tests, but they have specific applications. The special tests may have different names in different organizations, but the types are basically the same.

Regression Testing
* Regression testing is a powerful tool designed to verify that applications continue to perform their intended functions after the introduction of changes. The basic concept behind it involves a selection of test cases that are re-executed, together with procedures that ensure the software modifications have not caused unintended effects.
* Regression testing is the selective retesting of a system to verify that modifications have not caused unintended effects and that the system still complies with its specified requirements.
* The purpose of regression testing is to ensure that changes made to software, such as adding new features or modifying existing features, have not adversely affected features of the software that should not change.
* Regression testing is also a form of verification that the software works as expected even after undergoing a change.
* Regression testing can be defined as "a maintenance task performed on a modified software program to instill confidence that the changes are correct and have not adversely affected the unchanged portions of the software program".
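Example (Python): a minimal sketch of a regression check — the same test cases are re-run after a modification to confirm that the unchanged behaviour still holds. The discount function and its rules are illustrative assumptions.

import unittest

def discounted_price(price, customer_type):
    # Recently modified function: a new "student" rule was added; the
    # existing "regular" and "member" rules must not change.
    rates = {"regular": 0.00, "member": 0.10, "student": 0.05}
    return round(price * (1 - rates[customer_type]), 2)

class DiscountRegressionSuite(unittest.TestCase):
    # Existing tests kept from earlier releases: they protect the
    # unchanged portions of the software.
    def test_regular_price_unchanged(self):
        self.assertEqual(discounted_price(200.0, "regular"), 200.0)

    def test_member_discount_unchanged(self):
        self.assertEqual(discounted_price(200.0, "member"), 180.0)

    # New test covering the change itself.
    def test_new_student_discount(self):
        self.assertEqual(discounted_price(200.0, "student"), 190.0)

if __name__ == "__main__":
    unittest.main()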
* Whenever you introduce a modification or change to the software, you should run regression testing without fail. Any modification can have a negative effect on your system and data; modifications and changes could also lead to unwanted errors that you must remove before the software goes into use. In addition, the overall functionality may be affected by errors that occur due to significant changes to the system.
* The regression test suite (the subset of test cases that is re-executed) contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.
* When you add new code to your application, make sure that you run regression tests.

Benefits of Regression Testing:
1. The foremost benefit of conducting this test is to ensure that the software works correctly after introducing many changes and modifications.
2. The regression testing process is powerful enough to detect any errors introduced into the application along with a change. This test provides you a precautionary approach to safeguard against unforeseen errors.

GUI Testing
* GUI stands for Graphical User Interface. The part of a software program that the user sees and interacts with is called its graphical user interface. GUI testing is concerned with the interface as well as with functionality, as the interface may have an effect on usability; whenever you make a change to the software, you will need to make sure the user interface still behaves correctly.
* Graphical user interface testing is also known as GUI testing or UI testing. Here is a list of seven important traits common to a good UI:
1. Follows standards and guidelines:
* The single most important user interface trait is whether the software follows existing standards and guidelines. Everything is defined, from when to use check boxes instead of an option button to when it is proper to use the information, warning, and critical messages, as shown in Fig. 2.20.
2. Intuitive:
* When you are testing a user interface, consider the following things and how they might apply to judging how intuitive your software is:
- Is the user interface clean, not busy? The functions you need or the response you are looking for should be obvious and be there when you expect them.
- Is the UI organized and laid out well? Does it allow you to easily get from one function to another?
- Is what to do next obvious? At any point can you decide to do nothing or even back out? Do the menus or windows go too deep?
- Is there excessive functionality? Do too many features complicate your work? Do you feel you are getting information overload?
3. Consistent:
* Consistency within your software and with other software is a key attribute. Users develop habits and expect that if they do something a certain way in one program, another program will perform the same operation the same way. In Notepad, Find is accessed through the Search menu or by pressing F3; in WordPad, it is accessed through the Edit menu or by pressing Ctrl+F. Such inconsistencies confuse users as they move from one program to another. Consider the following:
- Shortcut keys and menu selections: in Windows, pressing F1 should always get you help.
- Terminology and naming: are the same terms used throughout the software? Are features named consistently? For example, is Find always called Find, or is it sometimes called Search?
- Placement for buttons such as OK and Cancel: did you ever notice that OK is usually on the top or left and Cancel on the right or bottom? Keyboard equivalents to onscreen buttons should also be consistent.
4. Flexible:
* Users like choices. The Windows Calculator, as shown in Fig. 2.22, has two views, Standard and Scientific; users can decide which one they need for their work, or the one they are most comfortable using.
5. Comfortable:
* Software should be comfortable to use. It should not get in the way or make it difficult for the user to do his/her work.
(i) Appropriateness: Software should look and feel proper for its intended use. For instance, a business application should look business-like; a space game, on the other hand, will have much more flashy visuals. The interface should neither be too loud nor too plain for the intended use.
(ii) Error handling: A program should warn users before a critical operation and allow them to restore data lost because of a mistake.
(iii) Performance: Being fast is not always a good thing if the user cannot tell what is happening; the software should make it clear when the work has been completed and how much remains.
6. Correct:
* When you are testing for correctness, you are testing whether the UI does what it is supposed to do. Make sure that what the GUI displays is correct: is the document onscreen exactly what is saved to disk? When you load it back, does it appear the same? Does the output perfectly match what is displayed onscreen?
7. Useful:
* The final trait of a good user interface is that it is useful: not only should the features work correctly, but the features specified in the requirements should actually help the user do their work.

Advantages of GUI Testing:
1. Consistency of the screen layout and controls is verified.
2. The tab sequence provides a logical way to move through the fields.
3. A good GUI improves the look and feel of the application and its acceptance by the user.

Smoke Testing
* Smoke testing involves testing the basic functionality of the application; it checks that the application is 'living', i.e. that it responds to input, each time a new build of the product is developed.
* The smoke-testing approach involves the following activities as builds are integrated:
1. Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover errors that have the highest possibility of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product is smoke tested regularly. The integration approach may be top down or bottom up.
* Fig. 2.23 shows daily builds and smoke testing: initial builds when the software is relatively unstable, and relatively stable builds after multiple rounds of regression tests that verify new functionality and bug fixes in the build.
* Smoke testing provides a number of benefits when applied to complex, time-critical software engineering projects. Some of them are listed below:
1. Integration risk is minimized: Because smoke tests are conducted daily, incompatibilities and other possible errors are uncovered early, thereby reducing the schedule impact when errors are uncovered.
2. The quality of the end product is improved: Because the approach is construction (integration) oriented, smoke testing is likely to uncover functional errors as well as architectural and component-level design errors. If these errors are corrected early, better product quality will result.
3. Progress is easier to assess: With each passing day, more of the software has been integrated and more has been demonstrated to work.
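Example (Python): a minimal sketch of a smoke test run against each daily build — a handful of broad, shallow checks that prove the build is "alive" before deeper testing starts. The base URL and endpoints are illustrative assumptions.

import unittest
import requests

BASE_URL = "http://localhost:8000"   # assumed location of the daily build

class SmokeTests(unittest.TestCase):
    def test_build_is_alive(self):
        # The build answers a trivial health request at all.
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        self.assertEqual(resp.status_code, 200)

    def test_critical_function_responds(self):
        # One critical, user-visible function responds without crashing.
        resp = requests.get(f"{BASE_URL}/login", timeout=5)
        self.assertLess(resp.status_code, 500)

if __name__ == "__main__":
    unittest.main()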
Sanity Testing
* Sanity testing is performed to test the major functionality or behaviour of the software. When there are some minor issues with the software and a new build is obtained after fixing the issues, then instead of doing complete regression testing a sanity test is performed on that build. You can say that sanity testing is a subset of regression testing.
* Sanity testing is done after thorough regression testing is over; it is done to make sure that defect fixes or changes made after regression testing do not break the core functionality. It is done towards the end of the product release phase.
* Sanity testing follows a narrow and deep approach, with detailed testing of some limited features.
* Sanity testing is like doing some specialized testing, and the tests are mostly non-scripted.
* Sanity testing is performed after receiving a software build with minor changes in code or functionality, to ascertain that the bugs have been fixed and that no further issues are introduced due to these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost involved in more rigorous testing.
* Sanity testing is used to make an initial check on the build after receiving it from the developer, to find out whether it is fit for further testing. This testing helps to uncover integration issues, finds errors in an early stage of testing, and increases the level of confidence of the testers.

Smoke Testing vs. Sanity Testing:
1. Smoke testing is performed to discover whether the critical functionalities of the program are working fine; sanity testing is done to check that the new functionality works and the reported bugs have been fixed.
2. The objective of smoke testing is to verify the "stability" of the system in order to proceed with more rigorous testing; the objective of sanity testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
3. Smoke testing is performed by the developers or testers; sanity testing is usually performed by testers.
4. Smoke testing is usually documented or scripted; sanity testing is usually not documented and is unscripted.
5. Smoke testing is a subset of regression testing; sanity testing is also treated as a subset of regression testing.
6. Smoke testing exercises the entire system from end to end; sanity testing exercises only a particular component of the system.
7. Smoke testing is like a general health check-up; sanity testing is like a specialized health check-up.
