UNIT 1
Quality assurance: evaluates the processes that produce products, identifies weaknesses, and corrects them. Its focus is methodology and processes.

Quality control: ensures the work product conforms to standards and requirements; identifies defects. Testing is quality control. Internal audit: a review of operations.

The cost of quality
- Prevention costs: all money spent before the product is actually built.
- Appraisal costs: reviewing products against requirements, after the product is built.
- Failure costs: repairing products to make them meet requirements, after delivery to the user or in production.
To minimize production costs, the project team must focus on defect prevention. Applying the concepts of continuous testing to the system development process can reduce the cost of quality.
Cost of quality + build cost = total product cost

Software quality factors
In defining the scope of testing, the risk factors become the basis or objectives of testing.
The 11 quality factors, grouped by concern:
- Product revision: maintainability, flexibility, testability
- Product transition: portability, reusability, interoperability
- Product operation: correctness, reliability, efficiency, integrity, usability
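The cost-of-quality arithmetic above can be sketched in a few lines. All figures here are hypothetical, chosen only to illustrate the relationship:

```python
# Illustrative only: every cost figure below is hypothetical.
prevention = 10_000   # spent before the product is built (training, standards)
appraisal = 15_000    # reviews against requirements, after the product is built
failure = 40_000      # repairs after delivery to the user or in production

cost_of_quality = prevention + appraisal + failure
build_cost = 100_000  # hypothetical development cost

total_product_cost = cost_of_quality + build_cost
print(f"Cost of quality: {cost_of_quality}")
print(f"Total product cost: {total_product_cost}")
```

Since failure costs dominate this (made-up) breakdown, spending more on prevention is the lever that shrinks the total.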

The Quality factor survey procedure

1. Consider basic system characteristics: for example, if the life cycle of the system is long, maintainability becomes a cost-critical consideration.
2. Consider life cycle implications [development, evaluation, post-development].
3. Perform trade-offs among the tentative list of software quality factors.
4. Identify the most important software quality factors.
5. Provide explanations for the choices. Document the rationale for decisions made during the first three steps.

Quality software
The producer's view of quality software means MEETING REQUIREMENTS.
The user's view of quality software means FIT FOR USE.

The two software quality gaps
- Producer gap: the difference between quality as specified and quality as delivered.
- Customer gap: the difference between quality as delivered and quality as expected by the customer.

Software testing helps to close the two gaps.

Why we test software
1. Developers are not good testers.
2. What is a defect? There are two types of defects: process defects [e.g., the test plan standard was not followed] and procedure defects [e.g., the test plan did not contain a statement of usability as specified in the requirements document].
- Why does a development process produce defects?
  Process in control: no special causes of variation in the process.
  Process out of control: special causes are present in the process.
Examples include software design defects and data defects.
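The in-control/out-of-control distinction can be made concrete with control limits. A minimal sketch, assuming hypothetical defect counts per build: limits are computed from a baseline period when the process was in control, and any later point outside them signals a special cause.

```python
import statistics

# Hypothetical defect counts per build while the process was in control.
baseline = [4, 5, 3, 6, 4, 5, 4, 5]

mean = statistics.mean(baseline)
sigma = statistics.pstdev(baseline)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

# New builds: anything outside the limits points to a special cause.
new_builds = [5, 4, 21, 6]
out_of_control = [x for x in new_builds if x > ucl or x < lcl]
print("builds with special causes:", out_of_control)
```

Here the build with 21 defects falls outside the limits, so it warrants a root-cause investigation rather than being treated as normal variation.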

Finding defects
Defects appear as:
- variance from the specification
- variance from what was designed

3. Reducing the frequency of defects in software development
Capability Maturity Model [SEI's maturity model]: a model that enabled the DoD to identify the more effective and efficient software developers. It defines five levels of maturity:
- Level 1 Ad hoc [unstructured levels of performance]. Discipline is emphasized (goals or objectives, process design, training people in work processes).
- Level 2 Control [work processes are designed, and validation and verification techniques are used to check compliance]. Core competencies are emphasized (defining and building the information group's core competencies).
- Level 3 Core competency. Measurement is emphasized.
- Level 4 Predictable (provides management the dashboards to manage quantitatively). World class is emphasized (the best that is possible).
- Level 5 Innovative (defines innovative ways to achieve higher levels of performance).

Factors affecting software testing
1. People relationships: a tester needs to understand what to test, who performs what type of test, when testing should be performed, how to actually perform the test, and when to stop testing.
2. Scope of testing (the extensiveness of the test process):
- whether the software development process identifies the true needs of the user
- finding defects early
- removing defects of all types before the software goes into production
- identifying weaknesses in the software development process
3. Life cycle testing. Testing activities by life cycle phase:
- Requirements: determine the verification approach; generate functional test data; determine consistency of design with requirements. Both product requirements and validation requirements should be established.
- Design: generate structural and functional test data; determine consistency with the design; the test plan and test documentation are produced.
- Program (build): generate structural and functional test data for programs; test tools and techniques exist for this stage of system development, e.g., code walkthroughs and code inspections.
- Test: test the application system; produce test reports.
- Installation: place the tested system into production; ensure that the correct versions of the programs are placed into production.
- Maintenance: modify and retest (regression).

4. Poorly developed test plans. A test plan is a contract between the software stakeholders and the testers. Four steps to develop a test plan:
1. Select and rank test objectives.
2. Identify the system development phases.
3. Identify the business risks associated with the system under development.
4. Place the risks in a matrix.

Testing constraints
- Limited schedule and budget. Problems: failing to define testing objectives; testing at the wrong phase in the life cycle; using ineffective test techniques. Optimum testing: the point where the cost of testing no longer exceeds the value received from the defects uncovered.
- Lacking or poorly written requirements. Test objectives can be easily derived from good system requirements documentation. As a final step, the test team should perform quality control: using a checklist or worksheet to ensure that the process for setting test objectives was followed, or reviewing the objectives with the system users.
- Changes in technology. The following technological developments are causing organizations to revise their approach to testing: integration; system chains (a problem in one system can cascade into and affect others); the domino effect; reliance on electronic evidence; multiple users.
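Steps 3 and 4 (identify risks, place them in a matrix) can be sketched as follows. The risk names and scores are hypothetical; the point is that likelihood x impact produces a ranking that drives what gets tested first.

```python
# Hypothetical business risks, each scored (likelihood 1-3, impact 1-3).
risks = {
    "incorrect pay calculation": (3, 3),
    "slow report generation":    (2, 1),
    "unauthorized data access":  (1, 3),
}

# Rank risks by likelihood * impact, highest exposure first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: score {likelihood * impact}")
```

The top of the ranking becomes the highest-priority test objective; low-scoring risks may get only minimal coverage when the schedule is tight.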

- Limited tester skills.

Life cycle testing
A test team is formed, and it must use structured methodologies when testing. The 11 steps span the life cycle phases: define software requirements, build software, install software, operate and maintain software.
Step 1: Assess the development plan and status.
Step 2: Develop the test plan.
Step 3: Test software requirements [verification].
Step 4: Test software design [verification].
Step 5: Program phase testing [validation: unit test and integration testing].
Step 6: Execute and record results [validation: integration testing and system testing].
Step 7: Acceptance test [validation: acceptance testing].
Step 8: Report test results.
Step 9: Test the software installation.
Step 10: Test software changes.
Step 11: Evaluate test effectiveness.

Step 1: Test matrices [the interrelationship between functional events and tests]. A test matrix defines the conditions that must be tested; it should be prepared during the requirements phase.

Example matrix for the economic event "give pay increase":

  Action                 | Test condition
  -----------------------+----------------------------------
  Initiate event         | Complete form
  Approve                | Increase approved / verify amount
  Data entry             |
  Form storage           | Store form
  Data entry validation  | Is the employee number valid?
  Logical validation     | Does the employee exist?
  Update                 | Change pay
  Audit/report           | Confirm report history
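A test matrix like the one above can be held as a simple data structure, which makes it easy to check that no action is missing a test condition. This is a minimal sketch of the "give pay increase" event; actions without a condition (like plain data entry) are omitted here for simplicity:

```python
# A test matrix: functional event -> {action: condition that must be tested}.
test_matrix = {
    "give pay increase": {
        "initiate event":        "complete form",
        "approve":               "increase approved / verify amount",
        "form storage":          "store form",
        "data entry validation": "is the employee number valid?",
        "logical validation":    "does the employee exist?",
        "update":                "change pay",
        "audit/report":          "confirm report history",
    }
}

# Quality-control check: every action for the event has a non-empty condition.
for event, actions in test_matrix.items():
    assert all(condition for condition in actions.values()), event
```

Keeping the matrix as data also supports the cascading style described next: each functional event gets its own small matrix that can be reused.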

- Cascading test matrices: include only one functional event per matrix, so it can be reused and can show its relationship to new functional events. The creation of cascading test matrices reinforces the interrelationships of functional events and enables that aspect of the system to be better tested.

Independent testing
Test resources report independently from the group designing and developing the application.

An independent tester must: thoroughly understand the system and its technology; possess creativity and business knowledge; understand the development methodology.

Tester's workbench
What is a process? There are two ways to visually portray a process:
1. PDCA: a conceptual view of a process. Plan -> Do -> Check -> Act.
2. Workbench view of a process:
- Objective: states why the process exists, or its purpose.
- People skills: the roles and responsibilities.
- Input: the entrance criteria.
- Procedures: describe how the work must be done.
- Deliverables: any product or service produced by the process.
- Standards: measures used to evaluate products and identify nonconformance.
- Tools: aids to performing the process.

Workbench flow:

  Input -> Do procedure -> Check procedure -> Deliverable

  (Policy -> Standard: policies drive the standards, and the check procedure measures deliverables against those standards.)

1. Input products [e.g., unit test specifications] are given to the tester.
2. The work is performed, producing products such as a defect list.
3. The work is checked to ensure the unit test meets the specifications and standards.
4. If the check finds no problems, the product can move to the next workbench, such as integration testing.
5. If the check finds problems, the product is sent back for rework.

Levels of testing
Verification testing > unit test > integration testing > system testing > user acceptance test
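The do/check/rework cycle above can be sketched as a small loop. Everything here is hypothetical (the "do" and "check" callables stand in for real procedures and standards):

```python
# A minimal workbench cycle: do the procedure, check the deliverable against
# the standard, and either pass it on or send it back for rework.
def run_workbench(unit, do, check, max_rework=3):
    for _ in range(max_rework + 1):
        deliverable = do(unit)        # do procedure produces the deliverable
        if check(deliverable):        # check procedure compares it to the standard
            return deliverable        # passes to the next workbench
        unit = deliverable            # problem found: rework through the do procedure
    raise RuntimeError("deliverable still fails its standard after rework")

# Toy usage: the "standard" requires the value to reach at least 10.
result = run_workbench(7, do=lambda u: u + 2, check=lambda d: d >= 10)
print(result)
```

The cap on rework cycles mirrors reality: a deliverable that keeps failing its check signals a process problem, not just a product problem.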

Verification vs. validation
Verification ensures that the system complies with the organization's standards [Did we build the system right?]; it relies on several types of reviews.
Validation ensures that the system operates according to plan [Did we build the right system?]; it executes real-life functions.

Functional and structural testing
Functional testing: black-box testing; no knowledge of the internal logic of the system is used to develop test cases.
Structural testing: white-box testing; predominantly verification.
To test a system effectively, you need to use both methods.

Static vs. dynamic testing
Most verification techniques are static tests [using software documentation and logic flow, e.g., requirements reviews; the code is not executed].
Most validation tests are dynamic tests [the code is executed to perform the tests].

The V concept of testing

The V concept pairs each development phase (static, verification) with a test level (dynamic, validation):

  Business need        <->  Acceptance test
  Define requirements  <->  System test
  Design               <->  Integration test
  Build                <->  Unit test

The 11-step software testing process is one example (see above).

Testing techniques
1. Structural vs. functional technique categories.
Functional testing ensures that requirements are properly satisfied by the application system [it is concerned with the results of processing].
Structural testing ensures sufficient testing of the implementation of a function [it evaluates that all aspects of the structure have been tested].

Structural testing technique categories
1. Stress: determines whether the system performs with expected volumes. Online systems are stress-tested by entering transactions at a normal or above-normal pace; batch systems can be tested with large input batches. Stress testing should be used when there is uncertainty about the amount of work the application system can handle without failing. Stress testing is most common with online applications.
2. Execution: determines whether the system achieves the desired level of proficiency (e.g., transaction turnaround time), using hardware and software monitors and by calculating turnaround time on transactions processed. Execution testing should be used early in the development process.
3. Recovery: determines whether the system can be returned to an operational status after a failure. Preparation: preserve backup data; store backup data in a secure location; document recovery procedures; assign and train recovery personnel; develop recovery tools and make them available. First the procedures, methods, and tools are assessed; second, a failure is introduced into the system and the ability to recover is tested. Recovery testing should be performed whenever the user of the application states that continuity of operation is essential.
4. Operations: determines whether the system can be executed in a normal operational status, i.e., that the operations procedures and staff can properly execute the application. It evaluates both the process and the execution of the process. Operations testing should occur before any application is placed into production status.
5. Compliance (to process): determines whether the system was developed in accordance with standards and procedures. Compliance testing requires that the prepared document or program be compared to the standards for that particular program or document [an inspection process].
6. Security: determines whether the system is protected in accordance with its importance to the organization. The first step is identification of the security risks and the potential loss. If the risk is high, specialized help should be acquired in conducting the security tests. Physical security concerns people; logical security concerns computer processing.
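The stress technique can be sketched as feeding increasing transaction volumes through the system and timing each run. The `process_transaction` function here is a hypothetical stand-in for the application under test:

```python
import time

def process_transaction(txn):
    # Hypothetical stand-in for the application under test.
    return txn * 2

def stress(volumes):
    """Time the processing of each transaction volume."""
    results = {}
    for n in volumes:
        start = time.perf_counter()
        for txn in range(n):
            process_transaction(txn)
        results[n] = time.perf_counter() - start
    return results

timings = stress([1_000, 10_000, 100_000])
for volume, seconds in timings.items():
    print(f"{volume} transactions: {seconds:.4f}s")
```

In a real stress test, the interesting signal is where the timing curve bends sharply or the system fails outright: that is the capacity limit the technique is meant to find.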

Security testing should be used when the information and/or assets protected by the application system are of significant value to the organization.

Functional system testing technique categories
1. Requirements: the system performs as specified. Requirements testing is primarily performed through the creation of test conditions and functional checklists. Functional testing is more effective when the test conditions are created directly from user requirements. Create a test matrix to prove that the system requirements as documented are the requirements desired by the user. Every application should be requirements tested.
2. Regression: verifies that anything unchanged still performs correctly. Regression testing should be used when there is a high risk that new changes will affect unchanged areas of the application system.
3. Error handling: errors can be prevented or detected, and then corrected. Brainstorm what might go wrong with the application; this requires you to think negatively and then conduct such tests. Error-handling testing should occur throughout the system development life cycle.
4. Manual support: the people-computer interaction works. Verify that the manual support procedures are documented and complete.
5. Intersystem: data is correctly passed from system to system. Intersystem testing should be conducted whenever there is a change in parameters between application systems.
6. Control: controls reduce system risk to an acceptable level. This testing ensures that the mechanisms that oversee the proper functioning of an application system work. One method that can be used in testing controls is to develop a risk matrix.
7. Parallel: the old system and the new system are run with the same input and the results are compared to detect unplanned differences. Parallel testing requires that the same input data be run through two versions of the same application.
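The parallel technique can be sketched as running identical inputs through both versions and collecting every input where the outputs differ. The two pay-calculation routines below are hypothetical; in this example the new version adds a planned overtime rule:

```python
# Hypothetical old and new versions of the same pay-calculation routine.
def old_version(hours, rate):
    return hours * rate

def new_version(hours, rate):
    base = hours * rate
    # Planned change: time-and-a-half beyond 40 hours.
    return base if hours <= 40 else base + (hours - 40) * rate * 0.5

# Run the same input data through both versions and flag differences.
inputs = [(38, 10.0), (40, 12.0), (45, 10.0)]
differences = [(h, r) for h, r in inputs if old_version(h, r) != new_version(h, r)]
print("inputs that differ:", differences)
```

Each flagged difference is then reviewed: here the 45-hour case differs because of the planned overtime change, so it is expected; any difference not traceable to a planned change would be a defect.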

Examples of specific testing techniques
1. White-box testing: statement coverage, decision coverage, condition coverage, decision/condition coverage, multiple-condition coverage.
2. Black-box testing: focuses on testing the function of the program against its specification.
- Equivalence partitioning: instead of exhaustive testing, divide the input into classes (below the valid range, within it, and above it) and test a representative of each.
- Boundary analysis: test just below, on, and just above each boundary.
- Error guessing.
3. Incremental testing: tests the interfaces between unit-tested programs as well as between system components. Two approaches: top-down and bottom-up.
4. Thread testing: demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually used together.
5. Requirements tracing: traces requirements, both functional and non-functional, to ensure that nothing has been lost in the translation of analysis deliverables.
6. Desk checking and peer review: desk checking is used more as a debugging technique than a testing technique.
7. Walkthroughs, inspections, and reviews.
- An inspection involves a step-by-step reading of the product, with each step checked against a predetermined list of criteria.
- In a walkthrough, the presenter provides test data and leads the team through a manual simulation of the system.
Design reviews and audits are performed as follows:
- System requirements review [test planning and test documentation are begun].
- System design review [tools required for verification support are identified at this stage].
- Preliminary design review [all test plans and documentation are reviewed at this stage].
- Final design review [the complete design is reviewed, including changes to the design].
- Final review.
8. Proof-of-correctness techniques: these usually consist of validating the consistency of an output "assertion" with respect to a program (requirement) and its input.
9. Simulation: verification is performed by determining whether a model of the software behaves as expected on models of the computational and external environments.
10. Boundary value analysis: test data should be chosen that lie both inside each input class and at the boundary of each class.
11. Error guessing and special-value analysis.
12. Cause-effect graphing.
13. Design-based functional testing.
14. Coverage-based testing.
15. Complexity-based testing.
16. Statistical analysis and error seeding.
17. Mutation analysis.
18. Flow analysis.
19. Symbolic execution.
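Equivalence partitioning and boundary value analysis can be illustrated together. The age-field rule below (18 through 65 inclusive) is a hypothetical specification chosen for the example:

```python
# Hypothetical specification: a field accepts ages 18 through 65 inclusive.
LOW, HIGH = 18, 65

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence partitioning: one representative per class
# (below the range, inside it, above it).
partition_cases = {10: False, 40: True, 70: False}

# Boundary value analysis: just below, on, and just above each boundary.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for age, expected in {**partition_cases, **boundary_cases}.items():
    assert is_valid_age(age) == expected, age
print("all partition and boundary cases pass")
```

Three partition cases replace exhaustive testing of every age, while the six boundary cases target the off-by-one mistakes (e.g., writing `<` instead of `<=`) that cluster at the edges of input ranges.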

Combining specific testing techniques
