ISEB Course Study

Study material for ISTQB Foundation Level Examination
Published by Saurabh on Jul 16, 2008
A Foundation Course in Software Testing

Module A: Fundamentals of Testing

1. Why is Testing Necessary?
Testing is necessary because the existence of faults in software is inevitable. Beyond fault detection, the modern view of testing holds that fault prevention (e.g. early fault detection and removal from requirements, designs, etc. through static tests) is at least as important as detecting faults in software by executing dynamic tests.
1.1. What are Errors, Faults, Failures, and Reliability?
1.1.1. An Error is…
A human action producing an incorrect result
The error is the activity undertaken by an analyst, designer, developer, or tester whose outcome is a fault in the deliverable being produced.
When programmers make errors, they introduce faults into program code
We usually think of programmers when we mention errors, but any person involved in the development activities can make an error, which injects a fault into a deliverable.
1.1.2.
A Fault is…
A manifestation of human error in software
A fault in software is caused by an unintentional action by someone building a deliverable. We normally think of programmers when we talk about software faults and human error, but human error causes faults in any project deliverable. Only faults in software cause software to fail; this is the most familiar situation.
Faults may be caused by requirements, design or coding errors
All software development activities are prone to error. Faults may occur in all software deliverables, either when they are first being written or when they are being maintained.
Software faults are static - they are characteristics of the code they exist in
When we test software, it is easy to believe that the faults in the software move. Software faults are static. Once injected into the software, they will remain there until exposed by a test and fixed.
1.1.3. A failure is…
A deviation of the software from its expected delivery or service
Software fails when it behaves in a different way than we expect or require. If we use the software properly and enter data correctly, but it behaves in an unexpected way, we say it fails. Software faults cause software failures when the program is executed with a set of inputs that expose the fault.
A failure occurs when software does the 'wrong' thing
We can say that if the software does the wrong thing, then the software has failed. This is a judgement made by the user or tester. You cannot tell whether software fails unless you know how the software is meant to behave. This might be explicitly stated in requirements, or you might have a sensible expectation that the software should not 'crash'.
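The error -> fault -> failure chain above can be sketched in code. The function and its bug below are hypothetical, invented purely for illustration: a programmer's error (typing < instead of <=) injects a static fault, but a failure is only observed when the program runs with an input that exposes it.

```python
# Hypothetical illustration of error -> fault -> failure.
# The programmer's ERROR (< instead of <=) injected a static FAULT.
# The fault sits in the code unnoticed until an input exposes it.

def is_valid_percentage(value: int) -> bool:
    """Intended behaviour: accept 0..100 inclusive."""
    return 0 <= value < 100   # FAULT: should be `value <= 100`

# These inputs do not expose the fault -- the software appears correct:
print(is_valid_percentage(0))    # True
print(is_valid_percentage(50))   # True
print(is_valid_percentage(101))  # False

# This input exposes the fault -- a FAILURE is observed:
print(is_valid_percentage(100))  # False, but 100 is a valid percentage
```

Note how the fault is static (it is always in the code), while the failure is dynamic (it only occurs for inputs at the faulty boundary).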
1.1.4. Reliability is…
The probability that software will not cause the failure of a system for a specified time under specified conditions
It is usually easier to consider reliability from the point of view of a poor product. One could say that an unreliable product fails often and without warning and lets its users down. However, this is an incomplete view. If a product fails regularly, but the users are unaffected, the product may still be deemed reliable. If a product fails only very rarely, but it fails without warning and brings catastrophe, then it might be deemed unreliable.
Software with faults may be reliable, if the faults are in code that is rarely used
If software has faults it might still be reliable, because the faulty parts of the software are rarely or never used - so it does not fail. A legacy system may have hundreds or thousands of known faults, but if these exist in parts of the system of low criticality, the system may still be deemed reliable by its users.
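A minimal numeric sketch of this point, with module names and probabilities invented for illustration: reliability depends on how often the faulty code is actually exercised, so a system with a very buggy but rarely-used module can still be reliable overall.

```python
# Hypothetical usage profile: what fraction of all executions hit
# each module, and how likely a single use of that module is to fail.
usage_profile = {
    "core_billing": 0.90,      # heavily used, no known faults
    "year_end_report": 0.10,   # rarely used, riddled with faults
}
failure_prob = {
    "core_billing": 0.0,
    "year_end_report": 0.5,
}

# Overall probability that a randomly chosen use of the system fails:
p_fail = sum(usage_profile[m] * failure_prob[m] for m in usage_profile)
print(f"probability of failure per use: {p_fail:.2f}")  # 0.05
```

Despite one module failing half the time it is used, the system as a whole fails only 5% of the time, which its users may well deem reliable.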
1.2. Why do we test?
1.2.1. Some informal reasons
To ensure that a system does what it is supposed to do
To assess the quality of a system
To demonstrate to the user that a system conforms to requirements
To learn what a system does or how it behaves.
1.2.2. A technician's view
To find programming mistakes
To make sure the program doesn't crash the system
1.3. Errors and how they occur
1.3.1. Imprecise capture of requirements
Imprecision in requirements produces the most expensive faults we encounter. Imprecision takes the form of incompleteness, inconsistencies, lack of clarity, ambiguity, etc. Faults in requirements are inevitable, however, because requirements definition is a labour-intensive and error-prone process.
1.3.2. Users cannot express their requirements unambiguously
When a business analyst interviews a business user, it is common for the user to have difficulty expressing requirements because their business is ambiguous. The normal daily workload of most people rarely fits into a perfectly clear set of situations. Very often, people need to accommodate exceptions to business rules, base decisions on gut feel and precedents which may be long-standing (but undocumented), or make a decision 'on the fly'. Many of the rules required are simply not defined or documented anywhere.
1.3.3. Users cannot express their requirements completely
It is unreasonable to expect the business user to be able to identify all requirements. Many of the detailed rules that define what the system must do are not written down. They may vary across departments. In any case, the user being interviewed may not have experience of all the situations within the scope of the system.
1.3.4. Developers do not fully understand the business
Few business analysts, and very few developers, have direct experience of the business process that a new system is to support. It is unreasonable to expect the business analyst to have enough skills to question the completeness or correctness of a requirement. Underpinning all this is the belief that users and analysts talk the same language in the first place, and can communicate.
1.4. Cost of a single fault
We know that all software has faults before we test it. Some faults have a catastrophic effect, but we also know that not all faults are disastrous and many are hardly noticeable.
1.4.1. Programmer errors may cause faults which are never noticed
It is clear that not every fault in software is serious. We have all encountered problems with software that cause us great alarm or concern. But we have also encountered faults for which there is a workaround, or which are obvious but of negligible importance. For example, a spelling mistake on a user screen which our customers never see, and which has no effect on functionality, may be deemed 'cosmetic'. Some cosmetic faults are trivial. However, in some circumstances, cosmetic may also mean serious. What might our customers think if we spelt 'quality' incorrectly on our Web site home page?
1.4.2. If we are concerned about failures, we must test more
If a failure of a certain type would have serious consequences, we need to test the software to ensure it doesn't fail in this way. The principle is that where the risk of software failure is high, we must apply more test effort. There is a straight trade-off between the cost of testing and the potential cost of failure.
1.5. Exhaustive testing
1.5.1. Exhaustive testing of all program paths is usually impossible
Exhaustive path testing would involve exercising the software through every possible program path. However, even 'simple' programs have an extremely large number of paths. Every decision in code with two outcomes effectively doubles the number of program paths. A 100-statement program might have twenty decisions in it, so might have 1,048,576 paths. Such a program would rightly be regarded as trivial compared to real systems that have many thousands or millions of statements. Although the number of paths may not be infinite, we can never hope to test all paths in real systems.
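The combinatorial explosion described above is easy to demonstrate. The sketch below simply applies the doubling rule from the text; the "one path per millisecond" testing rate is an invented assumption for illustration.

```python
# Each independent two-way decision doubles the number of program paths,
# so the 20-decision program from the text has 2**20 paths.
decisions = 20
paths = 2 ** decisions
print(paths)  # 1048576

# Even at an (optimistic, hypothetical) rate of one path tested per
# millisecond, a modest 40-decision program would take decades:
paths_40 = 2 ** 40
seconds = paths_40 / 1000
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years to cover 40 decisions")  # roughly 35 years
```

This is why test design focuses on selecting a small, high-value subset of paths rather than attempting exhaustive coverage.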
1.5.2. Exhaustive testing of all inputs is also impossible
If we disregard the internals of the system and approach the testing from the point of view of all possible inputs, we hit a similar barrier. We can never hope to test all of the effectively infinite number of inputs to real systems.
1.5.3. If we could do exhaustive testing, most tests would be duplicates that tell us nothing
Even if we used a tool to execute millions of tests, we would expect that the majority of the tests would be duplicates and would prove nothing. Consequently, test case selection (or design) must focus on selecting the most important or useful tests from the infinite number possible.
1.6. Effectiveness and efficiency
A test that exercises the software in ways that we know will work proves nothing
We know that if we run the same test twice, we learn very little the second time round. If we know before we run a test that it will almost certainly work, we learn nothing. If we prepare a test that explores a new piece of functionality or a new situation, we know that if the test passes we will learn something new - we have evidence that something works. If we test for faults in code and we try to find faults in many places, we increase our knowledge about the quality of the software. If we find faults, we can fix them. If we do not find faults, our confidence in the software increases.
Effective tests
When we prepare a test, we should have some view on the type of faults we are trying to detect. If we postulate a fault and look for that, it is likely we will be more effective. In other words, tests that are designed to catch specific faults are more likely to find faults and are therefore more effective.
Efficient tests
If we postulate a fault and prepare a test to detect it, we usually have a choice of tests. We should select the test that has the best chance of finding the fault. Sometimes, a single test could detect several faults at once. Efficient tests are those that have the best chance of detecting a fault.
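The postulate-a-fault approach can be sketched concretely. The function and the fault class below are hypothetical: we postulate a common off-by-one mistake at a boundary, and choose the test inputs with the best chance of exposing it rather than arbitrary ones.

```python
# Hypothetical example of designing a test around a postulated fault:
# an off-by-one error at the 17/18 boundary of this classification rule.

def classify_age(age: int) -> str:
    """Intended rule: 'minor' below 18, 'adult' at 18 and above."""
    return "adult" if age >= 18 else "minor"

def test_boundary() -> None:
    # Effective: these inputs sit exactly on the postulated fault site.
    # If >= had been mistyped as >, classify_age(18) would return
    # 'minor' and this test would fail, exposing the fault.
    assert classify_age(17) == "minor"
    assert classify_age(18) == "adult"
    # An input like classify_age(40) would pass under either operator,
    # so it is a far less effective test for this fault.

test_boundary()
print("boundary tests passed")
```

The boundary test is also efficient: two inputs are enough to detect the whole postulated fault class, so no further tests on that boundary are needed.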
1.7. Risks help us to identify what to test
The principle here is that we look for the most significant and likely risks and use these to identify and prioritise our tests.
We identify the most dangerous risks of the system
Risks drive our testing. The more typical risks are:
(1) Gaps in functionality may cost users their time. An obvious risk is that we may not have built all the required features of the system. Some gaps may not be important, but others may badly undermine the acceptability of the system. For example, if a system allows customer details to be created but never amended, this would be a serious problem if customers moved location regularly.
(2) Poor design may make software hard to use. For some applications, ease of use is critical. For example, on a web site used to take orders from household customers, we can be sure that few have had training in the use of the Net or, more importantly, our web site. So the web site MUST be easy to use.
(3) Incorrect calculations may cost us money. If we use software to calculate balances for customer bank accounts, our customers would be very sensitive to the problem of incorrect calculations. Consequently, tests of such software would be very high in our priorities.
(4) Software failure may cost our customers money. If we write software and our customers use that software to, say, manage their own bank accounts then, again, they would be very sensitive to incorrect calculations, so we should of course test such software thoroughly.
(5) Wrong software decisions may cost a life. If we write software that manages the control surfaces of an airliner, we would test such software as rigorously as we could, as the consequences of failure could be loss of life and injury.
We want to design tests to ensure we have eliminated or minimised these risks.
We use testing to address risk in two ways:
Firstly, we aim to detect the faults that cause the risks to occur. If we can detect these faults, they can be fixed, retested, and the risk is eliminated or at least reduced.
Secondly, if we can measure the quality of the product by testing and fault detection, we will have gained an understanding of the risks of implementation and be better able to decide whether to release the system or not.
1.8. Risks help us to determine how much we test
We can evaluate risks and prioritise them
Normally, we would convene a brainstorming meeting attended by the business and technical experts. From this we identify the main risks and prioritise them: which are most likely to occur, and which will have the greatest impact? What risks conceivably exist? These might be derived from past or current experience. Which are probable, so we really ought to consider them?
The business experts need to assess the potential impact of each risk in turn; the technical experts need to assess each technical risk. If a technical risk can be translated into a business risk, the business expert can then assign a level of impact.
For each risk in turn, we identify the tests that are most appropriate. That is, for each risk, we select system features and/or test conditions that will demonstrate that a particular fault that causes the risk is not present, or that will expose the fault so the risk can be reduced.
We never have enough time to test everything so...
The inventory of risks is prioritised and used to steer decision making on the tests that are to be prepared. We test more where the risk of failure is higher. Tests that address the most important risks will be prioritised higher.
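The prioritisation step above can be sketched with a simple likelihood-times-impact score. The risks and the 1-5 ratings below are invented for illustration; in practice they would come out of the brainstorming meeting with the business and technical experts.

```python
# Hypothetical risk inventory: (risk, likelihood 1-5, impact 1-5).
risks = [
    ("Incorrect interest calculation", 3, 5),
    ("Spelling mistake on admin screen", 4, 1),
    ("Customer address cannot be amended", 2, 4),
]

# Score each risk as likelihood x impact and rank highest-exposure first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{likelihood * impact:>2}  {name}")
# 15  Incorrect interest calculation
#  8  Customer address cannot be amended
#  4  Spelling mistake on admin screen
```

Test preparation effort would then follow this ordering: the frequent but trivial cosmetic fault scores below the rarer but costly calculation fault.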
