Testing Times in Computer Validation

Ian Lucas
SeerPharma Pty Ltd.

Journal of Validation Technology • February 2003 • Volume 9, Number 2

With so much written and talked about the different phases of testing, it is easy to become confused about what constitutes the 'correct' amount of testing for any specific application or implementation. This article attempts to simplify some of the many testing terms, and to provide some practical guidance in performing effective testing. The advice offered is focused on process control/automated manufacturing systems, but is also applicable to other computerized systems. As with all aspects of computer validation, a basic knowledge of the theory must be understood before testing can be effective. Although this article is a personal viewpoint, I have included several document references for further guidance on the subject.

The fundamental objective of testing is to prove some assertion. This sounds so obvious as to be silly, but the biggest cause of ineffective testing I have seen is the loss of clarity of this objective. People lose focus on what they are trying to prove, or only superficially test assertions. It is always easier to test what is expected than what is not. The reason users can find problems in systems that are not found during defined testing is that users are not restricted to a defined set of test data. We need to ensure that we are clear on what we are trying to prove, and that our test data truly covers all aspects of that assertion.

The next section describes how paragraphs on testing are generally written and expanded upon in project documentation. At the highest level, the 'Overall Test Objectives' stage, the purpose is almost always correct (e.g., show that system X works as described in the requirements specification). At this stage, objectives are very generic, so anything that conveys the general idea of implementing a quality system can be construed as correct. At the next level down, the 'test cases' stage, test objectives are again fairly generic (e.g., show that the user security system works as expected). At this stage, errors are usually found not in what is written, but rather in what is not. A common cause of test omissions is to write the cases based on the programs to test, rather than the functions to test. This leads to an almost one-to-one mapping between programs and tests, and makes the testing easier to perform from a testing point of view. Unfortunately, from the project point of view, the idea is not to make the testing easier, but to ensure that as much effective testing of the critical areas of the system as possible is performed in the time provided.

From the test cases, the test scripts are written. These describe the actual actions to perform, and the expected results that should be observed to prove the intent of the case. There is a real skill required to be
able to write these, so that the correct emphasis is placed on the intent of the case, rather than on the peripheral actions required to set up the case. There is also a skill required in asking the tester to produce, annotate, and verify hard copy attachments that form 'defendable proof' for that case (without producing attachments simply for the sake of documentation). There will be more information on this topic in the Journal of Validation Technology at a later date.

Even with a clarity of focus on the test objectives for a single test exercise, it is easy to lose focus as to where that exercise sits in relation to the total project. This can be due to the high number of testing terms, and to where they all fit in relation to each other. It is also attributable to the fact that many people's definitions of these terms differ from one another. My definitions and general testing philosophies are included below to assist with the understanding of the examples supplied.

As a general rule, I am an advocate of the 'X' model (GAMP3) of system development and testing (see Figure 1). This takes into account that systems are developed and tested (to varying degrees) in conditions away from their final environment. The overall testing exercise therefore covers both environments, with common sense and experience being used to determine the amount of testing performed in each sub-phase for each individual project. The aim is to minimize the duplication of testing throughout the phases, and to use each sub-phase to build upon the previous phases to provide comprehensive testing coverage.

(Note that in the interest of not overcomplicating this paper, I have not added the documents created on the left side of this diagram. Varying requirement specifications and designs can be on both the USER and SUPPLIER sides, depending upon whether a product or a custom solution is being developed.)
The Supplier Testing
Unit Testing
For systems where new code is developed, 'Unit Testing' is used to provide structural (also referred to as 'white box') testing of this code. This involves checking that the code has been written to defined coding standards, that the code reflects the design (usually the pseudo-code process specifications), and varying degrees of stand-alone module performance checks. These need to fit the criticality of the module, and involve input/output parameter checking and branch/code coverage. Note that for this testing there is an assumption that the design is correct as approved for development (a costly assumption in some cases).

As this testing is usually performed by another developer, emphasis is placed on the technical aspects rather than the functional aspects. However, it must be noted that additional code cannot magically appear in the executable program after the source code is written. It is therefore imperative that the coder add a detailed level of implicit 'safety-net' (my definition) code, as well as the explicit 'requirements' code.
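As an illustration of structural testing of such a module (a hypothetical Python sketch; the module name and checks are not from the original article), unit tests should exercise the safety-net branches as well as the expected path:

```python
# Hypothetical module under test: averages a list of readings, with
# explicit "safety-net" checks alongside the "requirements" code.
def average(readings):
    if not readings:
        raise ValueError("no readings supplied")    # safety-net branch
    total = 0.0
    for r in readings:
        if not isinstance(r, (int, float)):
            raise TypeError(f"non-numeric reading: {r!r}")  # safety-net branch
        total += r
    return total / len(readings)

# Structural unit tests: cover every branch (branch/code coverage) and
# check input/output parameters, not just the "happy path".
def test_average():
    assert average([1, 2, 3]) == 2.0               # expected path
    try:
        average([])                                # empty-input branch
        assert False, "expected ValueError"
    except ValueError:
        pass
    try:
        average([1, "x", 3])                       # non-numeric branch
        assert False, "expected TypeError"
    except TypeError:
        pass
```

A purely functional test with standard data would pass even if the two safety-net branches were missing; only the structural tests prove they exist.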
Figure 1. System Development and Testing

SUPPLIER (Development Environment): Unit Testing; System Testing; Customer Acceptance Testing
USER (Final Environment): Installation Qualification/Installation Testing; Operational Qualification; Performance Qualification/User Acceptance Testing
Without going into an overly technical explanation, please refer to Figure 2, which contains two pseudo-code listings. Both describe the process for summing a set of numbers, working out their average, and then saving the information in a data table. The design pseudo-code may not explicitly have the bold lines described in Listing Two, but it is important that the code review pick up that a level of this 'safety-net' code has been written.

For the above example, a typical functional test may be to run two or three sets of standard data through the algorithm, and check the expected result against the one in the database. Note that both sets of code would pass this test. It would only be when non-numeric data, no data, or a problem with the database occurred that the code generated from Listing Two would show its robustness over Listing One.

The above example is a little contrived, but the point is that in a large system many of these 'safety-net' checks will not be tested (as time is given to testing the expected paths). The robustness of the system only comes out over time, as unexpected data or situations are inadvertently encountered. This highlights the importance of unit testing, and of auditing the developers of critical software products to ensure that they have performed these code reviews on their products. As mentioned before, functional testing cannot put this safety net back in; it can only show that it is not there.
Unit Integration Testing – A Link Between Unit and System Testing
For complex systems, a level of testing of some of the integrated software modules may be warranted. This may involve writing software shell 'driver' programs, and calling up several modules to test flow and control between these modules. Note that this control will be tested again during system testing, and should only be performed if the jump between a single unit test and full functional testing is considered a substantial risk.
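A minimal sketch of such a 'driver' shell (hypothetical module names; the article does not prescribe an implementation) might exercise the flow of data and control between two integrated modules, including the error path, before full system testing:

```python
# Hypothetical integrated modules: a parser (module A) feeding an
# accumulator (module B).
def parse_reading(text):
    """Module A: convert one line of raw text to a numeric reading."""
    return float(text.strip())

def accumulate(readings):
    """Module B: sum the parsed readings."""
    total = 0.0
    for r in readings:
        total += r
    return total

def driver(raw_lines):
    """Driver shell: calls module A then module B, checking that
    control returns correctly when module A rejects an input."""
    parsed, rejected = [], []
    for line in raw_lines:
        try:
            parsed.append(parse_reading(line))
        except ValueError:
            rejected.append(line)    # control passes back on bad input
    return accumulate(parsed), rejected
```

For example, `driver(["1.5", "bad", "2.5"])` returns the accumulated total 4.0 together with the rejected line, demonstrating inter-module flow without a full functional test harness.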
System Testing
The supplier 'system testing' is where the system is functionally put through its paces. As well as the standard expected operational tests, other stress tests should be devised. These include:

• Power Tests – think about the most inopportune times to lose power on the system, and try these
• Time – if applicable, wind back and forward the time on the server and client Personal Computers (PCs) to simulate daylight savings time
• Status Transitions – look for processes that use statuses to determine operation, and check that these cannot deadlock the process. I have seen several processes where a product/batch had to be in status A to initiate the process, moved to status B during the process, then finished with status C. The problem was that if the process failed in status B, there was no way to restart or finish that process on that product/batch.

A major difference between system testing (also read OQ) and customer/user acceptance testing (also read PQ) is that system testing is not just trying to show that the system does what it is designed to do from the specifications; it is also trying to find errors. There is a fundamental mindset change between these two objectives that has not been captured in most guideline reference material. Take most V-model diagrams (see Figure 4) that associate requirements and design documents to the different testing phases.
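The status-transition deadlock described above can be sketched as a small state table (a hypothetical Python illustration, with a recovery path added to show what the missing transition would look like):

```python
# Statuses A -> B -> C, as in the batch example. The deadlock arises
# when a batch fails while in status B: re-initiating requires status A,
# and finishing sets status C, so a failed batch is stuck in B unless
# a rollback transition exists.
ALLOWED = {
    "A": {"B"},        # initiate processing
    "B": {"C", "A"},   # finish normally, OR roll back after a failure
    "C": set(),        # terminal status
}

def transition(status, new_status):
    if new_status not in ALLOWED[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# Without the "B" -> "A" rollback entry, a batch that failed in B could
# never be restarted: stress tests should deliberately try to deadlock
# the process this way.
```

A system test here would force a failure in status B and confirm the batch can still be recovered via the rollback transition.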
Figure 2. Pseudo-Code Listings Describing the Sum and Average Process

Listing One

Set Sum = 0, Count = 0
While not exit selected
    Read Number
    Add Number to Sum
    Increment Count
End While
Set Average = Sum / Count
Store Average in data table

Listing Two (safety-net lines shown in bold in the original)

Set Sum = 0, Count = 0
While not exit selected
    Read Number
    If input not numeric, report error, and do not add / increment
    Add Number to Sum
    Increment Count
End While
If Count = 0, report message, do not calculate average or store
Set Average = Sum / Count
Store Average in data table
If store fails, report error
