http://www.softwaretestingmentor.com/test-levels/system-testing.php 20-Mar-12: System Testing: Each project may & will have different hardware & software.

Checking or testing the whole system to verify that it meets the functional & technical requirements is called system testing. For example: say that you are working on some telecom project; for the hardware interface you may need Bluetooth, a serial cable, etc., & for the software interface you may need MS Outlook 98/2000/XP, Outlook Express 6, etc. This depends on the project. Checking all of these requirements is called system testing.

System testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, and also the software system itself integrated with any applicable hardware systems. Integration testing detects any inconsistencies between the software units that are integrated together. System testing is a more limiting type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

System testing is done on the entire system against the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase, where the focus is to have an almost destructive attitude and to test not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specifications.

21-Mar-12: Smoke Test: When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to test the stability of any interim build, and can be executed for platform qualification tests.

Sanity testing: Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes.
It is generally a subset of regression testing, and a group of test cases is executed that are related to the changes made to the app. Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

Smoke vs. Sanity:
1. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without getting into too much depth. A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
2. A smoke test is scripted, either using a written set of tests or an automated test. A sanity test is usually unscripted.
3. A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide. A sanity test is used to determine that a small section of the application is still working after a minor change.
4. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification). Sanity testing is cursory testing; it is performed whenever a cursory test is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing.
5. Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing. Sanity testing is done to verify whether requirements are met or not, checking all features breadth-first.

6. Every build release to the client should be preceded by smoke testing; smoke testing is implemented to make sure the application given to the client is working fine. Every build release from the developer phase should be followed by sanity testing; a sanity test is implemented to make sure the application is ready for complete testing.
7. Smoke testing will be a quick test on the final release media to test installation, launching and basic functionality navigation. In sanity testing we install the application, launch it and navigate through it.
8. Smoke testing is carried out using checklists. Smoke testing is carried out before every release to the client.

22-Mar-12: Regression testing: Regression testing is any type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes, such as enhancements, patches or configuration changes, have been made to them.

The intent of regression testing is to ensure that a change, such as a bug fix, did not introduce new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.

Some strategies and factors to consider during this process include the following:
* Test fixed bugs promptly. The programmer might have handled the symptoms but not have gotten to the underlying cause.
* Watch for side effects of fixes. The bug itself might be fixed but the fix might create other bugs.
* Write a regression test for each bug fixed.
* If two or more tests are similar, determine which is less effective and get rid of it.
* Identify tests that the program consistently passes and archive them.
* Focus on functional issues, not those related to design.
* Make changes (small and large) to data and find any resulting corruption.
* Trace the effects of the changes on program memory.
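The point "write a regression test for each bug fixed" can be sketched in a few lines of Python. This is an illustrative example only: the function `chunk` and the off-by-one bug it once had are hypothetical, not from the source.

```python
# A minimal regression test for a (hypothetical) fixed bug. Assume chunk()
# used to drop the final partial chunk; the test pins the corrected behaviour
# so the bug cannot silently reappear in a later change.

def chunk(items, size):
    """Split items into consecutive chunks of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def test_last_partial_chunk_is_kept():
    # The original defect: [5] was silently dropped.
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

def test_exact_multiple_unchanged():
    # Watch for side effects of the fix: the already-working path must not regress.
    assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

test_last_partial_chunk_is_kept()
test_exact_multiple_unchanged()
print("regression tests passed")
```

Kept in the suite permanently, such a test doubles as one of the "tests the program consistently passes" that can later be archived.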
26-Mar-12: Performance testing: Performance testing is done to determine software characteristics like response time, throughput or MIPS (millions of instructions per second) at which the system/software operates.

Performance testing is done by generating some activity on the system/software; this is done with the performance test tools available. The tools are used to create different user profiles and inject different kinds of activities on the server, which replicates the end-user environments.

The purpose of doing performance testing is to ensure that the software meets the specified performance criteria, and to figure out which part of the software is causing the performance to go down.

Performance testing tools should have the following characteristics:
* They should generate load on the system under test.
* They should measure the server response time.
* They should measure the throughput.

Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

Performance Testing Tools:
1. IBM Rational Performance Tester
2. LoadRunner
3. Apache JMeter
4. DbUnit
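The three tool characteristics above (generate load, measure response time, measure throughput) can be sketched with nothing but the Python standard library. This is a toy illustration, not a real tool; `handle_request` is a stand-in for whatever operation the system under test performs.

```python
import time
import statistics

def handle_request():
    # Stand-in for the operation under test (e.g. one call to the server).
    sum(i * i for i in range(1000))

def run_load(requests=200):
    """Generate load and report response times (ms) and throughput (req/s)."""
    response_times = []
    start = time.perf_counter()
    for _ in range(requests):
        t0 = time.perf_counter()
        handle_request()
        response_times.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_rps": requests / elapsed,       # measured throughput
        "avg_ms": statistics.mean(response_times),  # mean response time
        "p95_ms": sorted(response_times)[int(0.95 * requests)],
    }

print(run_load())
```

Real tools such as JMeter or LoadRunner add what this sketch lacks: concurrent virtual users, user profiles, ramp-up schedules and server-side resource monitoring.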

The base documents for writing a test case:
1. FDD -- Functional Design Document
2. BRD -- Business Requirement Document

3. Technical Design Document
4. Use Case Document

Perturbation Testing: This testing method focuses on faults in the arithmetic expressions appearing throughout a program. By monitoring the program states that arise at each of the target expressions during testing, it is possible to solve automatically for alternate expressions that would have yielded the same output on all tests run so far. Each such alternate expression, or perturbation, may be viewed as possibly the correct code that should have been written at that location. The goal of the tester must be to choose additional test inputs that distinguish between each such perturbation and the actual code, to determine which one is really correct.

Find the value of each of the letters:

    NOON
  + SOON
  + MOON
  ------
    JUNE

Maintenance Testing is done on already deployed software. The deployed software needs to be enhanced, changed or migrated to other hardware. The testing done during this enhancement, change and migration cycle is known as maintenance testing.

Once the software is deployed in an operational environment it needs some maintenance from time to time in order to avoid system breakdown; most banking software systems need to be operational 24*7*365, so it is very necessary to do maintenance testing of software applications.

In maintenance testing, the tester should consider 2 parts:
* Any changes made in the software should be tested thoroughly.
* The changes made in the software should not affect the existing functionality of the software, so regression testing is also done.

Why is Maintenance Testing required?
* Users may need new features in the existing software, which requires modifications to be made in the existing software, and these modifications need to be tested.
* End users might want to migrate the software to another, newer hardware platform or change the environment (OS version, database version, etc.), which requires testing the whole application on the new platforms and environment.

Load testing: Load testing is the process of putting demand on a system or device and measuring its response. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation.

Load testing is the process of subjecting a computer, peripheral, server, network or application to a work level approaching the limits of its specifications. Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system. Load testing can also be done in the field to obtain a qualitative idea of how well a system functions in the "real world."

Examples of load testing include:
* Running multiple applications on a computer or server simultaneously.
* Assigning many jobs to a printer in a queue.
* Subjecting a server to a large amount of e-mail traffic.
* Writing and reading data to and from a hard disk continuously.

Load testing can be conducted in two ways. Longevity testing, also called endurance testing, evaluates a system's ability to handle a constant, moderate workload for a long time. Volume testing, on the other hand, subjects a system to a heavy workload for a limited time. Either approach makes it possible to pinpoint bottlenecks, bugs and component limitations. For example, a computer may have a fast processor but a limited amount of RAM (random-access memory). Load testing can provide the user with a general idea of how many applications or processes can be run simultaneously while maintaining the rated level of performance.

http://agiletesting.blogspot.co.uk/2005/02/performance-vs-load-vs-stress-testing.html

Stress testing: Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.

Stress testing refers to a type of testing that is so harsh, it is expected to push the program to failure. For example, we might flood a web application with data, connections, and so on until it finally crashes. The fact of the crash might be unremarkable; the consequences of the crash (what else fails, what data are corrupted, and so forth) are the results of interest for the stress tester.

Negative testing, which includes removal of components from the system, is also done as a part of stress testing. Also known as fatigue testing, this testing should capture the stability of the application by testing it beyond its bandwidth capacity.
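The idea of steadily increasing load until a system reaches its threshold can be sketched as a small thread-based driver. Everything here is a stand-in: `service` fakes the system under test, and `CAPACITY` plays the role of its real concurrency limit. A real load test would drive an actual server and monitor its resources.

```python
import concurrent.futures
import time

CAPACITY = 8  # pretend the service can only handle 8 requests concurrently

def service(_):
    # Stand-in for the system under test; each call takes a fixed 10 ms.
    time.sleep(0.01)
    return True

def run_at_load(users):
    """Apply `users` concurrent requests and report success + wall-clock time."""
    start = time.perf_counter()
    # The min() models the capacity ceiling: beyond it, requests queue up.
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=min(users, CAPACITY)) as pool:
        results = list(pool.map(service, range(users)))
    return all(results), time.perf_counter() - start

# Ramp the load steadily; past CAPACITY the elapsed time starts to climb,
# which is the degradation a load test is looking for.
for users in (2, 4, 8, 16):
    ok, elapsed = run_at_load(users)
    print(f"{users:2d} users -> ok={ok}, {elapsed * 1000:.0f} ms")
```

A stress test would instead keep raising `users` (and remove the politeness of queueing) until the service starts failing, then examine what broke and what data were corrupted.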
Performance testing vs. load testing vs. stress testing:
* PERFORMANCE TESTING is the testing which is performed to ascertain how the components of a system are performing in a given particular situation.
* LOAD TESTING is meant to test the system by constantly and steadily increasing the load on the system till the time it reaches the threshold limit.
* Under STRESS TESTING, various activities to overload the existing resources with excess jobs are carried out in an attempt to break the system down.

Unit testing is a test (often automated) that validates that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class.

The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces called units. These units have specific behavior. The test done on these units of code is called a unit test. Unit testing depends upon the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and contains clearly defined inputs and expected results.

Unit testing is typically done by software developers to ensure that the code they have written meets software requirements and behaves as the developer intended.

A test is not a unit test if:
* It talks to the database
* It communicates across the network
* It touches the file system
* It can't run at the same time as any of your other unit tests
* You have to do special things to your environment (such as editing config files) to run it.
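A minimal sketch using Python's built-in unittest framework illustrates the points above. The unit under test, `apply_discount`, is hypothetical; note that the test obeys all five rules: no database, no network, no file system, no ordering dependency, no environment setup.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test (hypothetical): price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class TestApplyDiscount(unittest.TestCase):
    # Clearly defined inputs and expected results, per the text above.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved as a module, this runs with `python -m unittest`; equivalent frameworks exist for most languages (JUnit for Java, NUnit for .NET, and so on).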
Unit testing frameworks: Unit testing frameworks, which help simplify the process of unit testing, have been developed for a wide variety of languages. It is generally possible to perform unit testing without the support of a specific framework, by writing client code that exercises the units under test and uses assertion, exception, or early-exit mechanisms to signal failure. This approach is valuable because there is a non-negligible barrier to the adoption of unit testing frameworks. However, it is also limited, in that many advanced features of a proper framework are missing or must be hand-coded.

Hi All, here are a few points on Functional Testing.

Functional Testing: It tests the functioning of the system or software, i.e., what the software does. Functional testing is a type of black box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output; internal program structure is rarely considered.

The functions of the software are described in the functional specification document or requirements specification document. Functional testing is also known as component testing. Functional testing considers the specified behavior of the software.

Functional testing typically involves five steps:
* The identification of functions that the software is expected to perform
* The creation of input data based on the function's specifications
* The determination of output based on the function's specifications
* The execution of the test case
* The comparison of actual and expected outputs

To be specific: testing of all features and functions of a system [software, hardware, etc.] to ensure requirements and specifications are met. Functionality testing of software is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Functionality testing falls within the scope of black box testing, and as such should require no knowledge of the inner design of the code or logic.

Non-functional testing is the testing of a software application for its non-functional requirements. The names of many non-functional tests are often used interchangeably because of the overlap in scope between various non-functional requirements.
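The five steps of functional testing can be sketched as a table-driven black-box test. The "specification" and the unit under test (`is_leap_year`) are stand-ins chosen for illustration.

```python
# The five steps above as a table-driven black-box test. Step 1 identified
# the function; steps 2-3 derive inputs and expected outputs from its spec.

def is_leap_year(year):
    """Function under test: leap years per the Gregorian calendar rules."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Steps 2 and 3: input data and expected output taken from the specification.
cases = [
    (2000, True),   # divisible by 400 -> leap
    (1900, False),  # divisible by 100 but not 400 -> not leap
    (2012, True),   # divisible by 4 -> leap
    (2011, False),  # not divisible by 4 -> not leap
]

# Steps 4 and 5: execute each case and compare actual vs. expected output,
# with no reference to the internal structure of is_leap_year (black box).
for year, expected in cases:
    actual = is_leap_year(year)
    assert actual == expected, f"{year}: expected {expected}, got {actual}"
print("all functional cases passed")
```

The test knows only inputs and specified outputs; it would pass unchanged against any reimplementation that meets the same specification, which is the defining property of black-box testing.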
For example, software performance is a broad term that includes many specific requirements like reliability and scalability.

Non-functional testing includes:
* Baseline testing
* Compatibility testing
* Compliance testing
* Documentation testing
* Endurance testing
* Load testing
* Localization testing and Internationalization testing
* Performance testing
* Recovery testing
* Resilience testing
* Security testing
* Scalability testing
* Stress testing
* Usability testing
* Volume testing

Differences between Functional and Non-functional testing:
* Functional Testing: Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client, or by using design specifications like the use cases provided by the design team.
* Non-Functional Testing: Testing the application against the client's performance and design requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.

Functional Testing covers:
* Unit Testing
* Smoke testing / Sanity testing
* Integration Testing (Top Down, Bottom Up Testing)
* Interface & Usability Testing
* System Testing
* Regression Testing
* Pre User Acceptance Testing (Alpha & Beta)
* User Acceptance Testing
* White Box & Black Box Testing
* Globalization & Localization Testing

Non-Functional Testing covers:
* Load and Performance Testing
* Ergonomics Testing
* Stress & Volume Testing
* Compatibility & Migration Testing
* Data Conversion Testing
* Security / Penetration Testing
* Operational Readiness Testing
* Installation Testing
* Security Testing (Application Security, Network Security, System Security)

Waterfall Model: The waterfall model is one of the earliest structured models for software development. It consists of the following sequential phases through which the development life cycle progresses:
* System feasibility. In this phase, you consider the various aspects of the targeted business process, find out which aspects are worth incorporating into a system, and evaluate various approaches to building the required software.
* Requirement analysis. In this phase, you capture software requirements in such a way that they can be translated into actual use cases for the system. The requirements can derive from use cases, performance goals, target deployment, and so on.
* System design. In this phase, you identify the interacting components that make up the system. You define the exposed interfaces, the communication between the interfaces, key algorithms used, and the sequence of interaction. An architecture and design review is conducted at the end of this phase to ensure that the design conforms to the previously defined requirements.

* Coding and unit testing. In this phase, you write code for the modules that make up the system. You also review the code and individually test the functionality of each module.
* Integration and system testing. In this phase, you integrate all of the modules in the system and test them as a single system for all of the use cases, making sure that the modules meet the requirements.
* Deployment and maintenance. In this phase, you deploy the software system in the production environment. You then correct any errors that are identified in this phase, and add or modify functionality based on the updated requirements.

The waterfall model has the following advantages:
* It allows you to compartmentalize the life cycle into various phases, which allows you to plan the resources and effort required through the development process.
* It enforces testing in every stage in the form of reviews and unit testing. You conduct design reviews, code reviews, unit testing, and integration testing during the stages of the life cycle.
* It allows you to set expectations for deliverables after each phase.

The waterfall model has the following disadvantages:
* You do not see a working version of the software until late in the life cycle. For this reason, you can fail to detect problems until the system testing phase. Problems may be more costly to fix in this phase than they would have been earlier in the life cycle.
* When an application is in the system testing phase, it is difficult to change something that was not carefully considered in the system design phase. The emphasis on early planning tends to delay or restrict the amount of change that the testing effort can instigate, which is not the case when a working model is tested for immediate feedback.
* For a phase to begin, the preceding phase must be complete; for example, the system design phase cannot begin until the requirement analysis phase is complete and the requirements are frozen.
As a result, the waterfall model is not able to accommodate uncertainties that may persist after a phase is completed. These uncertainties may lead to delays and extended project schedules.

Spiral Model: The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development lifecycle (SDLC) model used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favored for large, expensive, and complicated projects.

The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great.
Risk factors might involve development cost overruns, operating cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.

6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

Prototyping Model: The prototyping model assumes that you do not have clear requirements at the beginning of the project. Often, customers have a vague idea of the requirements in the form of objectives that they want the system to address. With the prototyping model, you build a simplified version of the system and seek feedback from the parties who have a stake in the project. The next iteration incorporates the feedback and improves on the requirements specification.

The prototypes that are built during the iterations can be any of the following:
* A simple user interface without any actual data processing logic
* A few subsystems with functionality that is partially or completely implemented
* Existing components that demonstrate the functionality that will be incorporated into the system

The prototyping model consists of the following steps:
1. Capture requirements. This step involves collecting the requirements over a period of time as they become available.
2. Design the system. After capturing the requirements, a new design is made or an existing one is modified to address the new requirements.
3. Create or modify the prototype. A prototype is created or an existing prototype is modified based on the design from the previous step.
4. Assess based on feedback. The prototype is sent to the stakeholders for review.
Based on their feedback, an impact analysis is conducted for the requirements, the design, and the prototype. The role of testing at this step is to ensure that customer feedback is incorporated in the next version of the prototype.
5. Refine the prototype. The prototype is refined based on the impact analysis conducted in the previous step.
6. Implement the system. After the requirements are understood, the system is rewritten either from scratch or by reusing the prototypes. The testing effort consists of the following:
   o Ensuring that the system meets the refined requirements
   o Code review
   o Unit testing
   o System testing

The main advantage of the prototyping model is that it allows you to start with requirements that are not clearly defined.

The main disadvantage of the prototyping model is that it can lead to poorly designed systems. The prototypes are usually built without regard to how they might be used later, so attempts to reuse them may result in inefficient systems. This model emphasizes refining the requirements based on customer feedback, rather than ensuring a better product through quick change based on test feedback.

Incremental or Iterative Development: The incremental, or iterative, development model breaks the project into small parts. Each part is subjected to multiple iterations of the waterfall model. At the end of each iteration, a new module is completed or an existing one is improved on, the module is integrated into the structure, and the structure is then tested as a whole.

For example, using the iterative development model, a project can be divided into 12 one- to four-week iterations. The system is tested at the end of each iteration, and the test feedback is immediately incorporated at the end of each test cycle. The time required for successive iterations can be reduced based on the experience gained from past iterations. The system grows by adding new functions during the development portion of each iteration. Each cycle tackles a relatively small set of requirements; therefore, testing evolves as the system evolves. In contrast, in a classic waterfall life cycle, each phase (requirement analysis, system design, and so on) occurs once in the development cycle for the entire set of system requirements.

The main advantage of the iterative development model is that corrective actions can be taken at the end of each iteration. The corrective actions can be changes to the specification because of incorrect interpretation of the requirements, changes to the requirements themselves, and other design or code-related changes based on the system testing conducted at the end of each cycle.

The main disadvantages of the iterative development model are as follows:
* The communication overhead for the project team is significant, because each iteration involves giving feedback about deliverables, effort, timelines, and so on.
* It is difficult to freeze requirements, and they may continue to change in later iterations because of increasing customer demands. As a result, more iterations may be added to the project, leading to project delays and cost overruns.
* The project requires a very efficient change control mechanism to manage changes made to the system during each iteration.

Contrast with Waterfall development: Waterfall development completes the project-wide work-products of each discipline in one step before moving on to the next discipline in the next step. Business value is delivered all at once, and only at the very end of the project. Backtracking is possible in an iterative approach.
