ERRATA

P. 112, 14th line from top, left column: delete "d".
P. 125, right column, bottom line fallen away. It should read: "... can absorb substances, particularly ammonia."
P. 108, right column, bottom line fallen away. It should read: "VR (see 6.4) as indicated in the fourth box on the right, ..."

Guidelines for quality management in soil and plant laboratories

FAO Soils Bulletin 74

Compiled by
L.P. van Reeuwijk
Associate Professor, Head of Laboratory
International Soil Reference and Information Centre (ISRIC)
Wageningen, the Netherlands

with a contribution by
V.G. Houba
Associate Professor, Director of Wageningen Evaluating Programmes for Analytical Laboratories (WEPAL)
Department of Soil Science and Plant Nutrition
Wageningen Agricultural University
Wageningen, the Netherlands

Food and Agriculture Organization of the United Nations
Rome, 1998

The designations employed and the presentation of material in this publication do not imply the expression of any opinion whatsoever on the part of the Food and Agriculture Organization of the United Nations or of the International Soil Reference and Information Centre concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.

ISBN 92-5-104065-6

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying or otherwise, without the prior permission of the copyright owner. Applications for such permission, with a statement of the purpose and extent of the reproduction, should be addressed to the Director, Information Division, Food and Agriculture Organization of the United Nations, Viale delle Terme di Caracalla, 00100 Rome, Italy.

© FAO and ISRIC 1998

FOREWORD

Quality assurance in analytical laboratories, irrespective of the discipline, is considered a matter of course.
However, many laboratories find it very difficult to implement the necessary quality assurance procedures effectively and systematically, and in many places implementation leaves much to be desired. In particular, laboratories with marginal budgets, or smaller research laboratories working without much competition, often lack the resources and incentives to engage in a comprehensive effort of the kind undertaken by laboratories seeking accreditation. Proper training and refresher courses may be neglected or, all too often, properly trained staff resign to take up a better position elsewhere. Much neglected aspects of laboratory work also include the keeping of full, systematic records and the drafting and implementation of proper operating procedures and protocols.

To assist soil and plant laboratories facing these constraints, ISRIC, with the support of the Food and Agriculture Organization of the United Nations (FAO), developed practical guidelines for effective quality management. Emphasis was placed on achieving an improvement of performance by adopting a limited number of relatively simple rules and inexpensive measures based on the principles of Good Laboratory Practice. The many examples and model documents included in these guidelines should facilitate the adaptation and adoption of specific procedures and documents. I hope that these guidelines will find their way and prove useful to many laboratories.

Robert Brinkman
Director
Land and Water Development Division
FAO
PREFACE

Analytical data produced by different soil and plant laboratories appear to show an often distressingly large variability. Soil parameters are used, amongst others, for soil classification and correlation, land evaluation, and soil quality, fertility and pollution assessment. Erroneous data may lead to very costly mistakes by administrators and other authorities and also hamper technology transfer. In an attempt to reduce the observed variability, laboratory cross-checking programmes conducted in the recent past, notably ISRIC's Laboratory Methods and Data Exchange Programme (LABEX), have indicated that this phenomenon can to a large extent be attributed to essentially two causes:

1. High inaccuracy (bias) through lack of standardization of analytical procedures.
2. High imprecision (scattering) caused by lack of within-laboratory consistency.

Efforts to standardize soil analytical procedures at the international level are at present being undertaken by working groups of ISO (International Standardization Organization, Technical Committee 190). The solution of within-laboratory problems has been left to the initiative of individual institutes. Therefore, commercial laboratories in particular, whose success is directly related to the quality of their product, often have a lead here.

It is generally accepted that the quality of the output of laboratories strongly depends on the quality of the organization of the work, not only at the level of execution of the analysis but also at management level ('good tree, good fruits'). To achieve optimal performance, the concept of "Good Laboratory Practice" (GLP) was developed and has now been practised for quite some time by a number of categories of laboratories where the quality of the work is of vital importance, e.g. in the fields of food, medicine, toxicology and pollution. Consistent implementation of GLP in soil and plant analytical laboratories has not yet been done on a large scale, particularly not in developing countries, but it seems to be the only way to significantly and structurally improve laboratory performance.

It is somewhat unfortunate and confusing that GLP as a descriptive general term is in fact, by origin, a rather strict set of regulations for test laboratories. In the context of the present Guidelines the term "Proper Laboratory Practice" would perhaps be more appropriate. Of late, the term "Quality Management" has come into fashion and it is felt that this term fairly well covers the intention of the present book.

In many countries governments are introducing the rule that orders for environmental and ecological analyses should only be given to laboratories that are accredited for this type of work.
For accreditation, consistent Quality Management is an essential aspect. It is, however, not the purpose of this paper to guide laboratories to accreditation (at best it could be a start). Accreditation is a ponderous and expensive major undertaking, often involving the hiring of specialist consultants and the employment of (extra) personnel trained in laboratory organization and quality assurance. The present objective is rather to introduce a number of basic measures in the laboratory which do not necessarily require a substantial input of capital but may involve a change in attitude and practice of all laboratory personnel. On the other hand, where costs are involved, the justification can perhaps be found in an analogy with advertising: 'advertising is expensive, not advertising is more expensive'.

When reading the protocols, operating procedures and other instructions for Good Laboratory Practice as part of Quality Management, one may feel that a good many of them are already in practice in one way or another. Therefore, making an inventory of existing documents should always be a first step. In many cases, however, these concern half-way measures, not properly written up (or filed somewhere and never seen), the interpretation of which varies from person to person and from time to time. In many cases, notes and calculations are made on odd pieces of paper which happen to lie around. Rejected analytical results or readings are thrown away and malfunctioning apparatus is left to colleagues without notification. Good Laboratory Practice tries to avoid these ingrained habits by consistent documentation of all relevant actions ('what isn't written, isn't done'). Cynics sometimes tauntingly refer to GLP as 'Generates Lots of Paper'. Obviously, documentation can be overdone and then it may be counterproductive. In the present manual, too much documentation is consciously avoided.
A workable approach is preferred to a fully elaborated procedure involving a drastic change in prevailing practice, which may provoke evasive tactics or an attitude of rejection. Stricter or more comprehensive measures can always be implemented later when the need arises. A step-by-step approach should in any case be practised in the implementation of all new Quality Management rules and measures. There is a limit to what personnel and a laboratory as a whole can handle, absorb or digest in a limited span of time with a certain budget.

Success depends on the fulfilment of three major preconditions:

1. The directorate of the institute supports (or rather demands) the improvement.
2. The necessary means and time are made available.
3. All personnel participate; they should be made aware and be involved from the outset.

The first two items are the responsibility of the management of the institute; the third is mainly (but not only) the concern of the laboratory staff. The third condition underscores once again the importance of the cooperation, participation, involvement and contribution of all laboratory staff throughout the implementation of consistent Quality Management.

Chapter 1
INTRODUCTION

Since this manual is aimed at improving the performance of a laboratory, the activities involved focus on the term "quality". The quality of the product, in the present case analytical results, should obviously be acceptable. To establish whether the product fulfils the quality requirements, these have to be defined first. Only after that can it be decided whether the product is satisfactory, or if and what corrective actions need to be taken.

1.1 What is Quality?

The term "quality" has a relative meaning. This is expressed by the ISO definition: "The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs".
In simpler words, one can say that a product has good quality when it "complies with the requirements specified by the client". When projected onto analytical work, quality can be defined as "delivery of reliable information within an agreed span of time, under agreed conditions, at agreed costs, and with necessary aftercare". The "agreed conditions" should include a specification of the precision and accuracy of the data, which is directly related to "fitness for use" and which may differ for different applications. Yet, in many cases the reliability of data is not questioned and the request for specifications omitted. Many laboratories work according to established methods and procedures which are not readily changed and have inherent default specifications. Moreover, not all future uses of the data and reports can be foreseen, so that specifications about required precision and accuracy cannot even be given. Consequently, this aspect of quality is usually left to the discretion of the laboratory. However, all too often the embarrassing situation exists that a laboratory cannot evaluate and account for its quality simply because the necessary documentation is lacking.

In the ensuing discussions numerous activities aimed at maintaining the production of quality are dealt with. In principle, three levels of organization of these activities can be distinguished. From the top down these levels are:

1. Quality Management (QM)
2. Quality Assurance (QA)
3. Quality Control (QC)

1.2 Quality Management

Quality Management is the assembly and management of all activities aimed at the production of quality by organizations of various kinds. In the present case this implies the introduction and proper running of a "Quality System" in laboratories. A statement of objectives and policy to produce quality should be made for the organization or department concerned (by the institute's directorate).
This statement also identifies the internal organization and responsibilities for the effective operation of the Quality System.

Quality Management can be considered a somewhat wider interpretation of the concept of "Good Laboratory Practice" (GLP). Therefore, the basics of the present Guidelines inevitably largely coincide with those of GLP. These are discussed below in Section 1.5.

Note. An even wider concept of quality management is presently coming into vogue: "Total Quality Management" (TQM). This concept includes additional aspects such as leadership style, ethics of the work, social aspects, relation to society, etc. For an introduction to TQM the reader is referred to Parkany (1995).

1.3 Quality Assurance

Proper Quality Management implies consistent implementation of the next level: Quality Assurance. The ISO definition reads: "the assembly of all planned and systematic actions necessary to provide adequate confidence that a product, process, or service will satisfy given quality requirements." The result of these actions aimed at the production of quality should ideally be checked by someone independent of the work: the Quality Assurance Officer. If no QA officer is available, then usually the Head of Laboratory performs this job as part of his quality management task. In the case of special projects, customers may require special quality assurance measures or a Quality Plan.

1.4 Quality Control

A major part of quality assurance is Quality Control, defined by ISO as "the operational techniques and activities that are used to satisfy quality requirements." An important part of quality control is Quality Assessment: the system of activities to verify whether the quality control activities are effective, in other words: an evaluation of the products themselves.

Quality control is primarily aimed at the prevention of errors. Yet, despite all effort, it remains inevitable that errors are made.
Therefore, the control system should have checks to detect them. When errors or mistakes are suspected or discovered, it is essential that the "Five W's" are trailed:

- What error was made?
- Where was it made?
- When was it made?
- Who made it?
- Why was it made?

Only when all these questions are answered can proper action be taken to correct the error and prevent the same mistake being repeated.

The techniques and activities involved in Quality Control can be divided into four levels of operation:

1. First-line control: Instrument performance check.
2. Second-line control: Check of calibration or standardization.
3. Third-line control: Batch control (control sample, identity check).
4. Fourth-line control: Overall check (external checks: reference samples, interlaboratory exchange programmes).

Because the first two control levels both apply to the correct functioning of the instruments, they are often taken together, and then only three levels are distinguished. This designation is used throughout the present Guidelines:

1. First-line control: Instrument check / calibration.
2. Second-line control: Batch control.
3. Third-line control: External check.

It will be clear that producing quality in the laboratory is a major enterprise requiring a continuous human effort and input of money. The rule of thumb is that 10-20% of the total costs of analysis should be spent on quality control. Therefore, for quality work at least four conditions should be fulfilled:

- means are available (adequate personnel and facilities);
- efficient use of time and means (costs aspect);
- expertise is available (answering questions; aftercare);
- upholding and improving the level of output (continuity).

In quality work, management aspects and technical aspects are inherently cobbled together, and for a clear insight and proper functioning of the laboratory these aspects have to be broken down into their components.
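The second-line (batch) control mentioned above is commonly implemented with a control chart: a control sample is analyzed with every batch and its result is compared with limits derived from earlier results of the same sample. As a minimal sketch of the idea (Python is an assumption here; the manual prescribes no software), using the conventional warning limits at the mean plus or minus 2s and action limits at plus or minus 3s:

```python
import statistics

def control_limits(history):
    """Derive control-chart limits from earlier results of the same control sample."""
    mean = statistics.mean(history)
    s = statistics.stdev(history)
    return {"mean": mean,
            "warning": (mean - 2 * s, mean + 2 * s),  # exceeded: be alert
            "action": (mean - 3 * s, mean + 3 * s)}   # exceeded: reject the batch

def check_batch(result, limits):
    """Classify a new control-sample result as 'accept', 'warning' or 'action'."""
    lo, hi = limits["action"]
    if result < lo or result > hi:
        return "action"
    lo, hi = limits["warning"]
    if result < lo or result > hi:
        return "warning"
    return "accept"

# Hypothetical example: earlier pH readings of a laboratory control sample
history = [6.51, 6.48, 6.53, 6.50, 6.47, 6.52, 6.49, 6.50]
limits = control_limits(history)
print(check_batch(6.50, limits))  # well within the warning limits
print(check_batch(9.99, limits))  # far outside the action limits
```

The history values are invented for illustration; control charts of this kind are treated in detail in Chapter 8.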
This is done in the ensuing chapters of this manual.

1.5 Good Laboratory Practice (GLP)

Quality Management in the present context can be considered a modern version, with a somewhat wider interpretation, of the hitherto much used concept "Good Laboratory Practice" (GLP). The OECD Document defines GLP as follows: "Good Laboratory Practice (GLP) is concerned with the organizational process and the conditions under which laboratory studies are planned, performed, monitored, recorded, and reported."

Thus, GLP prescribes that a laboratory work according to a system of procedures and protocols. This implies that the organization of the activities and the conditions under which these take place are controlled, reported and filed. GLP is a policy for all aspects of the laboratory which influence the quality of the analytical work. When properly applied, GLP should:

- allow better laboratory management (including quality management);
- improve efficiency (thus reducing costs);
- allow quality control (including tracking of errors and their cause);
- stimulate and motivate all personnel;
- improve safety;
- improve communication possibilities, both internally and externally.

The result of GLP is that the performance of a laboratory is improved and its working effectively controlled. An important aspect is also that the standards of quality are documented and can be demonstrated to authorities and clients. This results in an improved reputation for the laboratory (and for the institute as a whole). In short, the message is:

- say what you do;
- do what you say;
- do it better;
- be able to show what you have done.

The basic rule is that all relevant plans, activities, conditions and situations are recorded and that these records are safely filed and can be produced or retrieved when necessary. These aspects differ strongly in character and need to be attended to individually. As an assembly, the documents involved constitute a so-called Quality Manual.
This comprises all relevant information on:

- Organization and Personnel;
- Facilities;
- Equipment and Working materials;
- Analytical or testing systems;
- Quality control;
- Reporting and filing of results.

Since institutions having a laboratory are of divergent natures, there is no standard format and each has to make its own Quality Manual. The present Guidelines contain examples of forms, protocols, procedures and artificial situations. They need at least to be adapted, and many new ones will have to be made according to specific needs, but all have to fulfil the basic requirement of usefulness and verifiability.

As already indicated, the guidelines for Quality Management given here are mainly based on the principles of Good Laboratory Practice as laid down in various relevant documents such as ISO and ISO/IEC guides, the ISO 9000 series, OECD and CEN (EN 45000 series) documents, and national standards (e.g. NEN standards)*, as well as a number of text books. The consulted documents are listed in the Literature. Use is also made of documents developed by institutes which have obtained accreditation or are working towards it. This concerns mainly so-called Standard Operating Procedures (SOPs) and Protocols. Sometimes these documents are hard to acquire as they are classified information for reasons of competitiveness. The institutes and persons which cooperated in the development of these Guidelines are listed in the Acknowledgements.

* ISO: International Standardization Organization; IEC: International Electrotechnical Commission; OECD: Organization for Economic Cooperation and Development; CEN: European Committee for Standardization; EN: European Standard; NEN: Dutch Standard.

Chapter 2
STANDARD OPERATING PROCEDURES

2.1 Definition

An important aspect of a quality system is to work according to unambiguous Standard Operating Procedures (SOPs).
In fact, the whole process from sampling to the filing of the analytical result should be described by a continuous series of SOPs. A SOP for a laboratory can be defined as follows:

"A Standard Operating Procedure is a document which describes the regularly recurring operations relevant to the quality of the investigation. The purpose of a SOP is to carry out the operations correctly and always in the same manner. A SOP should be available at the place where the work is done."

A SOP is a compulsory instruction. If deviations from this instruction are allowed, the conditions for these should be documented, including who can give permission for them and what exactly the complete procedure will be. The original should rest in a secure place, while working copies should be authenticated with stamps and/or signatures of authorized persons.

Several categories and types of SOPs can be distinguished. The name "SOP" may not always be appropriate; e.g. descriptions of situations or other matters may be better designated protocols, instructions or simply registration forms. Also, worksheets belonging to an analytical procedure have to be standardized (to avoid jotting down readings and calculations on odd pieces of paper).

A number of important SOP types are:

- Fundamental SOPs. These give instructions on how to make SOPs of the other categories.
- Methodic SOPs. These describe a complete testing system or method of investigation.
- SOPs for safety precautions.
- Standard procedures for operating instruments, apparatus and other equipment.
- SOPs for analytical methods.
- SOPs for the preparation of reagents.
- SOPs for receiving and registration of samples.
- SOPs for Quality Assurance.
- SOPs for archiving and for how to deal with complaints.

2.2 Initiating a SOP

As implied above, the initiative and further procedure for the preparation, implementation and management of the documents is a procedure in itself which should be described.
These SOPs should at least mention:

a. who can or should make which type of SOP;
b. to whom proposals for a SOP should be submitted, and who adjudges the draft;
c. the procedure of approval;
d. who decides on the date of implementation, and who should be informed;
e. how revisions can be made or how a SOP can be withdrawn.

It should be established and recorded who is responsible for the proper distribution of the documents, and for the filing and administration (e.g. of the original and further copies). Finally, it should be indicated how frequently a valid SOP should be periodically evaluated (usually 2 years) and by whom. Only officially issued copies may be used; only then is the use of the proper instruction guaranteed.

In the laboratory, the procedure for the preparation of a SOP should be as follows. The Head of Laboratory (HoL) charges a staff member of the laboratory to draft a SOP (or the HoL does this himself, or a staff member takes the initiative). In principle, the author is the person who will work with the SOP, but he or she should always keep in mind that the SOP needs to be understood by others. The author requests a new registration number from the SOP administrator or custodian (who in smaller institutes or laboratories will often be the HoL, see 2.4). The administrator verifies whether the SOP already exists (or is being drafted). If the SOP does not yet exist, the title and author are entered into the registration system. Once the writing of a SOP is undertaken, the management must actively support this effort and allow authors adequate preparation time.

In the case of methodic or apparatus SOPs, the author asks one or more qualified colleagues to try out the SOP. In the case of execution procedures for investigations or protocols, the project leader or HoL could do the testing. In this phase the wording of the SOP is fine-tuned.
When the test is passed, the SOP is submitted to the SOP administrator for acceptance. Revisions of SOPs follow the same procedure.

2.3 Preparation of SOPs

The make-up of the documents should meet a minimum number of requirements:

1. Each page should have a heading and/or footing mentioning:
   a. date of approval and/or version number;
   b. a unique title (abbreviated if desired);
   c. the number of the SOP (preferably with category);
   d. page number and total number of pages of the SOP;
   e. the heading (or only the logo) of originals should preferably be printed in a colour other than black.

   Categories can be denoted with a letter or combination of letters, e.g.:
   - F for fundamental SOP
   - A or APP for apparatus SOP
   - M or METH for analytical method SOP
   - P or PROJ for procedure to carry out a special investigation (project)
   - PROT for a protocol describing a sequence of actions or operations
   - ORG for an organizational document
   - PERS for describing personnel matters
   - RF for registration form (e.g. chemicals, samples)
   - WS for worksheet (related to analytical procedures)

2. The first page, the title page, should mention:
   a. the general information mentioned under 2.3.1 above, including the complete title;
   b. a summary of the contents with purpose and field of application (if these are not evident from the title);
   c. if desired, the principle may be given, including a list of points that may need attention;
   d. any related SOPs (of operations used in the present SOP);
   e. possible safety instructions;
   f. name and signature of the author, including date of signing (it is possible to record the authors centrally in a register);
   g. name and signature of the person who authorizes the introduction of the SOP (including date).

3. The necessary equipment, reagents (including grade) and other means should be detailed.

4. A clear, unambiguous, imperative description is given in a language mastered by the user.

5. It is recommended to include criteria for the control of the described system during operation.

6. It is recommended to include a list of contents, particularly if the SOP is lengthy.

7. It is recommended to include a list of references.

2.4 Administration, Distribution, Implementation

From this description it would seem that the preparation and administration of a SOP and other quality assurance documentation is an onerous job. However, once the draft is made, the task can be considerably eased with the use of word processors and a distribution scheme of persons and departments.

A model for a simple preparation and distribution scheme is given in Figure 2-1. This is a relation matrix which can be used not only for the laboratory but for any department or a whole institute. In this matrix (which can be given the status of a SOP) can be indicated all persons or departments that are involved with the subject, as well as the kind of their involvement. This can be indicated in the scheme with an involvement code. Some of the most usual involvements are (the number can be used as the code):

1. Taking initiative for drafting
2. Drafting the document
3. Verifying
4. Authorizing
5. Implementing/using
6. Copy for information
7. Checking implementation

Fig. 2-1. Matrix of information organization (see text). [The matrix itself, documents against persons/departments, is not reproduced here.]

There is a multitude of valid approaches for the distribution of SOPs, but there must always be a mechanism for informing potential users that a new SOP has been written or that an existing SOP has been revised or withdrawn. It is worthwhile to set up a good filing system for all documents right at the outset. This will spare much inconvenience, confusion and embarrassment, not only in internal use but also with respect to the institute's management, authorities, clients and, if applicable, inspectors of the accreditation body.
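A relation matrix of this kind can also be kept electronically. The following minimal sketch (Python is an assumption; any card system or database package mentioned in the text would serve equally well) records, per document and per person or department, the involvement codes listed above; the documents and assignments shown are hypothetical:

```python
# Involvement codes as listed in the text (Fig. 2-1)
CODES = {1: "Taking initiative for drafting", 2: "Drafting the document",
         3: "Verifying", 4: "Authorizing", 5: "Implementing/using",
         6: "Copy for information", 7: "Checking implementation"}

# Hypothetical matrix: document -> {person or department: set of involvement codes}
matrix = {
    "F 002":    {"Head of Laboratory": {1, 4},
                 "SOP administrator": {2, 7},
                 "Chemistry Dept.": {5}},
    "PROT 005": {"QA Officer": {4, 7},
                 "Laboratory staff": {5}},
}

def involvements(document, holder):
    """List the involvement descriptions of one holder for one document."""
    return [CODES[c] for c in sorted(matrix[document][holder])]

print(involvements("F 002", "Head of Laboratory"))
# ['Taking initiative for drafting', 'Authorizing']
```

Keeping the codes as sets makes it straightforward to check, for instance, that every document has exactly one authorizer (code 4).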
The administrator responsible for distributing and archiving SOPs may differ per institute. In large institutes or institutes with an accredited laboratory this will be the Quality Assurance Officer; otherwise it may be an officer of the department of Personnel & Organization, or someone else again. In non-accredited laboratories the administration can most conveniently be done by the head of laboratory or his deputy. The administration may be done in a logbook, by means of a card system or, more conveniently, with a computerized database such as PerfectView or Cardbox. Suspension files are very useful for keeping originals, copies and other information about documents. The most logical system seems to be an appropriate grouping into categories with a master index for easy retrieval. It is most convenient to keep these files in a central place such as the office of the head of laboratory. Naturally, this does not apply to working documents that obviously are used at the work place in the laboratory, e.g. instrument logbooks, operating instruction manuals and laboratory notebooks.

The data which should be stored per document are:

- SOP number
- version number
- date of issue
- date of expiry
- title
- author
- status (title submitted; being drafted; draft ready; issued)
- department of holders/users
- names of holders
- number of copies per holder if this is more than one
- registration numbers of SOPs to which reference is made
- historical data (dates of previous issues)

The SOP administrator keeps at least two copies of each SOP: one for the historical file and one for the back-up file. This also applies to revised versions. Superseded versions should be collected and destroyed (except the copy for the historical file) to avoid confusion and unauthorized use.

Examples of various categories of SOPs will be given in the ensuing chapters. The contents of a SOP for the administration and management of SOPs can be distilled from the above.
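These per-document records map naturally onto a small computerized register. As a minimal sketch (Python is an assumption; the text itself suggests database packages such as PerfectView or Cardbox), with field names taken from the list above and hypothetical example data:

```python
from dataclasses import dataclass, field

@dataclass
class SOPRecord:
    """One register entry per SOP, with the data fields listed in the text."""
    number: str
    version: int
    title: str
    author: str
    date_of_issue: str
    date_of_expiry: str
    status: str = "title submitted"  # -> being drafted -> draft ready -> issued
    holders: dict = field(default_factory=dict)     # holder name -> number of copies
    references: list = field(default_factory=list)  # SOP numbers referred to
    history: list = field(default_factory=list)     # dates of previous issues

registry = {}  # SOP number -> SOPRecord

def issue_new_version(rec, date):
    """Supersede the current version: archive its issue date, raise the version."""
    rec.history.append(rec.date_of_issue)
    rec.version += 1
    rec.date_of_issue = date
    rec.status = "issued"

rec = SOPRecord("F 002", 1, "Administration of Standard Operating Procedures",
                "Head of Laboratory", "95-06-21", "97-06-21", status="issued")
registry[rec.number] = rec
issue_new_version(rec, "97-05-01")
print(rec.version, rec.history)  # 2 ['95-06-21']
```

The `history` list corresponds to the historical file kept by the SOP administrator; superseded paper copies are destroyed, but their issue dates remain on record.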
An example of the basic format is given as Model F 002 (p. 8).

2.5 Laboratory notebooks

Unless recorded automatically, raw data and readings of measurements are most conveniently written down on worksheets that can be prepared for each analytical method or procedure, including calibration of equipment. In addition, each laboratory staff member should have a personal Notebook in which all observations, remarks, calculations and other actions connected with the work are recorded in ink, not pencil, so that they will not be erased or lost. To ensure integrity, such a notebook must meet a few minimum requirements. On the cover it must carry a unique serial number, the owner's name, and the date of issue. The copy is issued by the QA officer or head of laboratory, who keeps a record of this (e.g. in his/her own Notebook). The user signs for receipt, the QA officer or HoL for issue. The Notebook should be bound and the pages numbered before issue (loose-leaf bindings are not GLP!). The first one or two pages can be used for an index of contents (to be filled in as the book is used). Such Notebooks can be made from ordinary notebooks on sale (before issue, the page numbering should then be done by hand or with a special stamp) or with the help of a word processor, then printed and bound in a graphical workshop. The instructions for the proper use of a laboratory notebook should be set down in a protocol; an example is given as Model PROT 005 (p. 10). A model for the pages in a laboratory notebook is given on p. 12.

2.6 Relativization as encouragement

In the Preface it was stated that documentation should not be overdone and that for the implementation of all new Quality Management rules the philosophy of a step-by-step approach should be adopted.
It is emphasized that protocols and SOPs, as well as the administration involved, should be kept as simple as possible, particularly in the beginning. The Quality Management system must grow by trial and error, with increasing experience, by group discussions and with changing perceptions. In the beginning, attention will be focused on basic operational SOPs, later shifting to record keeping (as more and more SOPs are issued) and to filling gaps as practice reveals missing links in the chain of Quality Assurance. Inevitably, problems will turn up. One way to solve them is to talk with people in other laboratories who have faced similar problems. Do not forget that Quality Management is a tool rather than a goal. The goal is quality performance of the laboratory.

Guidelines for Quality Management in Laboratories

STANDARD OPERATING PROCEDURE                        Page: 1 # 2
LOGO   Model: F 002   Version: 1   Date: 95-06-21
Title: Administration of Standard Operating Procedures   File:

1 PURPOSE
To give unambiguous instruction for the proper management and administration of Standard Operating Procedures as they are used in the Regional Soil Survey Institute (RSSI).

2 PRINCIPLE
Standard Operating Procedures are an essential part of a quality system. For all jobs and duties the relevant operating procedures should be available at the work station. To guarantee that the correct version of the instruction is used, copying Standard Operating Procedures is prohibited. Standard Operating Procedures are issued on paper with the heading printed in green.

3 FIELD OF APPLICATION
Generally for use in the quality system of RSSI, but more specifically this instruction is for use in the Chemistry Department.

4 RELATED SOPs
- F 011      The preparation of SOPs for apparatus
- F 012      The preparation of SOPs for methods
- PROJ 001   The preparation of SOPs for special investigations

5 REQUIREMENTS
Database computer program, PerfectView or Cardbox.
6 PROCEDURE

6.1 Administration
The administration of SOPs for the Chemistry Department can be done by the Head of Laboratory. (See these Guidelines, 2.2.)

6.3 Revision of SOPs
(See these Guidelines, 2.2.)

Author (sign.):      QA Officer (sign.):      Date of Expiry:

Chapter 2. Standard Operating Procedures

STANDARD OPERATING PROCEDURE                        Page: 2 # 2
LOGO   Model: F 002   Version: 1   Date: 95-06-21
Title: Administration of Standard Operating Procedures   File:

6.5 Distribution of SOPs
When the SOP fulfils all the necessary requirements, it is printed. The author hands over the manuscript (or the floppy disk with the text) to the SOP administrator, who is responsible for the printing. The number of copies is decided by him/her and the author. Make a matrix of distribution (see Guidelines for Quality Management, Fig. 2-1). The author (or his successor) signs all copies in the presence of the administrator before distribution. As the new copies are distributed, the old ones (if there were any) are taken in. For each SOP a list of holders is made. The holder signs for receipt of a copy. The list is kept with the spare copies. Copying SOPs is forbidden. Extra copies can be obtained from the SOP administrator. Users are responsible for the proper keeping of the SOPs. If necessary, copies can be protected by a cover or foil, and/or be kept in a loose-leaf binding.

7 ARCHIVING
Proper archiving is essential for good administration of SOPs. All operating instructions should be kept up-to-date and be accessible to personnel. Good Laboratory Practice requires that all documentation pertaining to a test or investigation be kept for a certain period. SOPs belong to this documentation.

8 REFERENCES
Mention here the Standards and other references used for this SOP.

STANDARD OPERATING PROCEDURE                        Page: 1 # 2
LOGO   Model: PROT 005   Version: 1   Date: 95-11-28
Title: The use of Laboratory Notebooks
1 PURPOSE
To give instruction for the proper lay-out, use and administration of Laboratory Notebooks in order to guarantee the integrity and retrievability of raw data (if no preprinted Work Sheets are used), calculations and notes pertaining to the laboratory work.

2 PRINCIPLE
Laboratory Notebooks may either be issued to persons for personal use or to Study Projects for common use by participating persons. They are used to write down observations, remarks, calculations and other actions in connection with the work. They may be used for raw data, but bound preprinted Work Sheets are preferred for this.

3 RELATED SOPs
- F 001      Administration of SOPs
- PROJ 001   The preparation of SOPs for Special Investigations

4 REQUIREMENTS
Bound notebooks with about 100-150 consecutively numbered pages. Any binding which cannot be opened is suitable; a spiral binding is very convenient. Both ruled and squared paper can be used. On each page, provisions may be made for dating and signing for entries, and for signing for verification or inspection.

5 PROCEDURE

5.1 Issue
Notebooks are issued by or on behalf of the Head of Laboratory, who keeps a record of the books in circulation (this record may have a format similar to a Laboratory Notebook or be part of the HoL's own Notebook). On the cover, the book is marked with an assigned (if not preprinted) serial number and the name of the user (or of the project). On the inside of the cover the HoL writes the date of issue and signs for issue. The user (or Project Leader) signs the circulation record for receipt.

5.2 Use
All entries are dated and made in ink. The person who makes the entry signs per entry (in project notebooks) or at least per page (in personal notebooks). The Head of Laboratory (and/or Project Leader) may inspect or verify entries and pages and may sign for this on the page(s) concerned.

Author (sign.):      QA Officer (sign.):
STANDARD OPERATING PROCEDURE                        Page: 2 # 2
LOGO   Model: PROT 005   Version: 1   Date: 95-11-28
Title: The use of Laboratory Notebooks

If entries are corrected, they should be lined out with a single line so that it remains possible to see what has been corrected. Essential corrections should be initialed and dated and the reason for the correction stated. Pages may not be removed; if necessary, a whole page may be deleted by a diagonal line.

5.3 Withdrawal
When full, the Notebook is exchanged for a new one. The HoL is responsible for proper archiving. A notebook belonging to a Study Project is withdrawn when the study is completed. When an employee leaves the laboratory for another post, (s)he should hand in her/his notebook to the HoL.

6 ARCHIVING
The Head of Laboratory is custodian of the withdrawn Laboratory Notebooks. They must remain accessible for inspection and audit trailing.

7 REFERENCES

Model page of laboratory notebook (page layout with fields: SUBJECT; Work/Test ID; Date; File; Signature; Verified by).

Chapter 3

ORGANIZATION AND PERSONNEL

In this chapter the place and internal structure of the Organization or Institute, of which the laboratory is a part, are discussed. The description of the internal structure inherently includes the job descriptions of the various positions throughout the organization as well as a list of all the personnel involved, their qualifications, knowledge, experience and responsibilities. Because of the continuity of the work it is important that, in case of illness or other absence of staff, replacement by a qualified and experienced colleague is pre-arranged.

3.1 Function and aims of the Institute

The function and/or the aims of the institute should be drawn up in order to set a framework defining the character of the laboratory.
This description should rest in several places so that it can easily be produced upon request (Directorate, Secretariat, heads of departments or sections including Personnel & Organization, as well as the public relations officer). As an example, the aims of ISRIC, an institute with an analytical laboratory, are given on p. 15 and 16.

3.2 Scope of the laboratory

If the field of work, or the scope, of the laboratory is not made specifically clear in the description of the Institute's activities, it should be elaborated in a separate statement. Soil analysis for soil characterization and land evaluation is not the same as analysis for soil fertility purposes and advice to farmers. Such a statement should be kept with the overall statement about the scope of the institute.

3.3 Organigram

The organizational set-up of an institute can conveniently be represented in a diagram, the organigram (also called organogram). An organigram should be drawn by the department of Personnel & Organization (P&O) (or equivalent) on behalf of the Directorate. Since the organization of an institute is usually quite dynamic, frequent updating of this document might be necessary. For the laboratory an important aspect of the organigram is the hierarchical line of responsibilities, particularly in case of problems such as damage, accidents or complaints from clients. Not all details of these responsibilities can be given in the main organigram. Such details are to be documented in sub-organigrams and the various job descriptions (see 3.5), as well as in the regulations and statutes of the institute as a whole. As an example, the simplified organigram of ISRIC is given on p. 17 (Model ORG 001); a sub-organigram of the laboratory is given on a Job Description Form (Model PERS 011, p. 18).

3.4 Description of work processes

The way work is organized in the laboratory should be described in a SOP.
This includes the kind and frequency of consultations and meetings, how jobs are assigned to laboratory personnel, how instructions are given and how results are reported. The statement that personnel are protected from improper pressure of work can also be made in this SOP.

3.5 Job descriptions, personnel records, job allocation, replacement of staff

Quality assurance in the laboratory requires that all work is done by staff who are qualified for the job. Thus, to ensure a job is done by the right man or woman, it is essential for the management to have records of all personal skills and qualifications of staff as well as of the required qualifications for the various jobs.

3.5.1 Job descriptions

The professional requirements for each position in an organization have to be established and laid down in a Job Description Form, which for clarity may carry an organigram or sub-organigram showing the position (Model PERS 011, p. 18). The job descriptions of the heads of departments or sections are usually made by the department of P&O in consultation with the directorate; those of other jobs are made by P&O (on behalf of the directorate) in consultation with the respective heads of departments or sections. Copies should rest with P&O and the heads of departments concerned, as well as with the person(s) filling the position.

3.5.2 Personnel records

The list of laboratory personnel with their capabilities and skills is made by the head of laboratory in consultation with the department of Personnel & Organization, and both should have a copy. A record of the personal qualifications and skills of each staff member can be called a Staff Record Form; a model is shown here as PERS 012 (p. 21). When this form is completed, the place of the person in the organization can be indicated by a code of the position as shown in the sub-organigram drawn on the Job Description Forms (Model PERS 011, p.
18), in this case capitals A, B, C, etc.

From the Job Descriptions and the Qualifications of Staff (PERS 011 and PERS 012) a short-list can be derived indicating the positions of staff. An example of such a list is Model PERS 013 (p. 22). For quick reference, a matrix table is a convenient and surveyable way of listing the skills of staff. This is shown in Model PERS 014 (p. 23), where it is recorded per person for which jobs he or she is qualified. In fact, such a proficiency list is the basis of the job allocation to staff. This allocation of jobs, i.e. a listing of all relevant tasks with the persons who perform them (who-is-doing-what), including substitutes, can be indicated on a Job Allocation Form (Model PERS 015, p. 24, 25). Combinations of lists are always possible, of course (e.g. PERS 013 and 014).

All these lists are prepared by the heads of departments and P&O, and should be made available to the directorate, the secretariat, and the heads of other departments. Staff of departments should at least have access to a copy of the lists of their own department. Although for small working groups such lists may seem to be overdone and perhaps superfluous, in departments with many people they are necessary.

3.5.3 Substitution of staff

The absence of a staff member may create a problem as a part of the work of the laboratory is interrupted. For holidays this problem is usually limited as these are planned and measures can be taken in advance: a job can be properly completed or a substitute can be organized in time. Unexpected absence, such as in the case of illness, presents a different situation, as for certain procedures a substitute needs to be arranged at short notice and a person might not be readily available. The extent of disruption varies with the type of job concerned. Some jobs can be left unattended for a few days, but others need instant take-over, e.g. when extracts have been prepared and need to be measured soon after.
Other jobs are essential for the continuity of the work programme. If the preparation of samples is interrupted, the supply to the laboratory stops. When moisture determinations are not done, the calculation of the results of many analyses cannot be done. Usually the head of laboratory, knowing his staff, will ask a colleague of the absentee to stand in. However, such a simple solution may not always be at hand. The colleague may be engaged in a job at the time, he may be absent also, or the head himself may be away and then his deputy, who may not have the same insight, has to act. To cope with these situations a scenario for substitution has to be available. To a large extent such a scenario is based on the personal qualifications, skills and experience of the laboratory staff. Sometimes help must be sought from outside: when the necessary expertise is not available, or when the absence is too protracted.

A scenario for substitution can be made in several ways. The most obvious way is based on the Job Allocation Form (PERS 015, p. 24). First on the list for each task is the one who normally performs the job. If, in case of absence, no one is available for substitution, several options can be considered:
1. The job is not carried out (perhaps someone becomes available soon).
2. Someone from outside the laboratory is hired or borrowed (having ascertained that he or she has the necessary skills).
3. The job is put out to contract (ascertain that the other laboratory has satisfactory quality standards).

In case of incidental short-term substitution of a staff member in the laboratory, e.g. in the case of illness, this change from the normal occupation can usually be adequately documented in laboratory Notebooks and on the various worksheets and/or data sheets pertaining to the jobs concerned. In any case, the head of laboratory should keep a record in his own Notebook. More permanent changes in staff or in the organization, however, require more paper work.
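The substitution lookup sketched above, first the person who normally performs the job, then qualified colleagues from the proficiency matrix, and only then outside help or contracting, can be illustrated in a few lines. This is a hypothetical sketch: the names, jobs and qualifications are invented, not taken from the Guidelines.

```python
# Minimal sketch of a proficiency matrix in the spirit of Model PERS 014,
# and the substitution lookup it enables (all entries are invented examples).
QUALIFIED = {
    "pH measurement":         ["A. Jones", "B. Lee"],
    "Sample preparation":     ["C. Diaz"],
    "Moisture determination": ["B. Lee", "C. Diaz"],
}

def substitutes(job: str, absentee: str) -> list[str]:
    """Everyone qualified for the job other than the absent performer."""
    return [person for person in QUALIFIED.get(job, []) if person != absentee]

print(substitutes("pH measurement", "A. Jones"))    # → ['B. Lee']
print(substitutes("Sample preparation", "C. Diaz")) # → [] : hire, borrow or contract out
```

An empty result corresponds to the three fall-back options above: postpone the job, borrow qualified help from outside, or put the job out to contract.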
All such changes have to be recorded on all the relevant registration forms mentioned above. Therefore, these must be revised accordingly. As observed in Chapter 1, the most onerous aspect of the procedure is the distribution of the revised documents to the persons and offices where they are required (and the taking back of the obsolete ones). On the other hand, should the work involved provide an incentive to limit changes in laboratory staff, then it serves an unintended additional purpose: a rapid turn-over of staff is, generally, detrimental to the continuity and quality of the work.

3.6 Education and training of staff

To maintain or improve the quality of the work, it is essential that staff members follow training courses from time to time. These may concern new developments in analytical techniques or approaches, data handling, the use of computers, laboratory management (such as Quality Management and LIMS) or training in the use of newly acquired instruments. Such training can be given within the institute, by outside specialists, or centrally conducted courses can be attended, if necessary abroad. In certain cases it may be worthwhile to second someone to another laboratory for a certain period to get in-service training and experience in a different laboratory culture.

Ideally, after training or attending a course, the staff member should report and convey his experience or knowledge to colleagues and make proposals for any change of existing procedures or adoption of new practices to improve the performance of the laboratory. Tests to assess the proficiency of analysts are discussed in Chapter 6.

In many laboratories it is common practice that technicians change duties from time to time (e.g. each half year) or carry out more than one type of analysis in order to avoid creating bad habits and to increase job satisfaction and motivation. An advantage is gained in an increased flexibility of the laboratory staff with respect to skills,
but a disadvantage is the possible reduction of productivity and quality of results in the transitional period.

3.7 Introduction of new staff

When a new employee is appointed in the laboratory, he or she should be properly introduced to the other staff, to the rules of the laboratory in general and in particular to the details of his/her new job. In order to ensure that this is properly done it is useful to draw up a SOP with a checklist of all aspects involved. A programme of training and of monitoring the settling into the job has to be made. After a probationary period the head of laboratory will make an evaluation and report this to P&O. If applicable, a final decision as to the appointment can be made.

Example of concise description of function and aims of an institute

INTERNATIONAL SOIL REFERENCE AND INFORMATION CENTRE (ISRIC), Wageningen, Netherlands

Position

The International Soil Reference and Information Centre, ISRIC, is a centre for documentation, research, and training about the world's soils, with emphasis on the resources of developing countries. It houses a large collection of soil monoliths with related data and documents, books, reports and maps. ISRIC collects, generates and transfers information on soils by lecturing and by publishing monographs and papers on the collected materials and research data. Training courses are given, usually in developing countries. Participation in scientific working groups is directed towards developments in soil genesis, classification and correlation, mapping, soil databases (e.g. the use of Geographic Information Systems - GIS), and land evaluation.

ISRIC was born out of an initiative of the International Society of Soil Science. It was adopted by Unesco as one of its activities in the field of earth sciences. The Centre was founded in 1966 by the Government of the Netherlands.
Advice on the programme and activities of ISRIC is given by a Scientific Advisory Council with members from the Dutch agricultural scientific community and from international organisations such as FAO and Unesco. Core funds are provided by the Dutch Directorate-General for Development Cooperation. Project activities are generally externally funded.

Aims

- To serve as a Data Centre for documentation about soil as a natural resource, through assembling soil monoliths, reports, maps and other information on soils of the world, with emphasis on the developing countries.
- To contribute to an increase in the understanding of the soil for sustained utilization in a changing global environment.
- To improve the accessibility of soil and terrain information for the widest possible range of users through applied research, improvement of research methods, and advice on the establishment of soil laboratories, soil reference collections and databases.
- To contribute to developments in soil classification, soil mapping and land evaluation and in the development of geographically referenced soils and terrain digital databases.

Visitors' services

ISRIC provides information on soils of the world, on the preparation of soil monoliths for display, on techniques of soil information systems, etc. Visitors may consult the collections of soil monoliths, reports, maps, books and soil databases through:
- individual visits, during which visitors may consult the collections with or without the help of the staff;
- group visits, which include one or two day visits by groups of students to get an introduction to soil classification and/or to practice classification;
- individual guest research of 3-12 months, during which scientists may use ISRIC's collections for a specific study.

Depending on the purpose of the study and the degree of staff involvement, a fee may be charged. ISRIC provides staff for analytical services, consulting and training, against payment.
Details of tariffs will be provided on request.

Activities

Soil monolith collections and NASRECs
- Assembling and analyzing representative profiles of the major soils of the world and displaying a reference collection of soil monoliths at ISRIC. The present collection comprises more than 900 profiles from over 70 countries.
- Assembling a collection of laterite profiles and developing a descriptive terminology and classification of laterite for interdisciplinary use (CORLAT).
- Advising on the establishment of national soil reference collections and databases (NASRECs) for training, research, land use planning and agricultural extension services in individual countries. A bi-annual Unesco-ISRIC training course is given for this purpose. On-site support is given on a project basis.

Laboratory
- Analyzing samples representative of the soil collection, testing and improving methods and procedures of soil analysis.
- Advising and instructing soil laboratories on organization, equipment and procedures with the aim to improve their performance. Aspects are the introduction of Quality Management and the development of systems for quality control.
- Laboratory Information Management System for soil and plant laboratories (SOILIMS).
- Seat of the Bureau of the Wageningen Soil, Plant and Water Analytical Laboratories (WaLab), a cooperation of four Wageningen research laboratories to perform a wide range of quality analyses for third parties.

Soil inventory and mapping
Assembling a collection of soil and related maps, geo-referenced databases and reports for consultation and various uses. ISRIC's Soil Information System (ISIS) contains data of the collected soil profiles. ISRIC has a library and an extensive map ...

- Problem: Weight indication does not light up, decimal point does not light up.
  Cause: - Power supply
         - Supply voltage
         - Balance not switched on
         - Fuse defective (Warning: when changing fuse, pull plug from socket!)
- Problem: Weight indication does not light up, decimal point does light up.
  Cause: - Overload
- Problem: Weight indication is changing continuously.
  Cause: - Balance not switched on long enough, operating temperature not yet reached
         - Unsatisfactory installation conditions (draught, vibrations)
- Problem: Weighing results incorrect.
  Cause: - Unsatisfactory installation conditions
         - Balance not levelled
         - Sensitivity setting incorrect (solution: adjust balance)

If the balance cannot be made to function properly, call qualified assistance.

Source: Winand Staring Centre for Integrated Land, Soil and Water Research (SC-DLO), Wageningen.

STANDARD OPERATING PROCEDURE                        Page: 5 # 5
LOGO   Model: APP 062   Version: 1   Date: 95-02-02
Title: Operation of electronic balance Sartorius 3708 MP 1

9 MAINTENANCE

9.1 Maintenance by user
- Keep balance clean.
- Calibrate and adjust balance weekly and after each removal.
- Removing the balance:
  Pull plug from socket
  Remove balance
  Connect plug with socket
  Level balance
  Switch on balance
  Wait for 20 minutes (or less if balance was warm) and adjust balance as described in Sections 7.2 and 7.3 of this SOP.

9.2 Maintenance by supplier
Have balance serviced, calibrated and adjusted once a year.

10 LOGBOOK
Record in Maintenance (and/or Calibration) Logbook:
- All malfunctions encountered
- All actions taken to solve problems
- All calibrations

11 REFERENCE
Instruction for Installation and Operation of 3707 MP 1. No date. Sartorius-Werke, Göttingen, Germany.

Chapter 5. Materials: Apparatus, Reagents, Samples

STANDARD OPERATING PROCEDURE                        Page: 1 # 5
LOGO   Model: APP 071   Version: 1   Date: 94-11-22
Title: Operation of pH meter Metrohm E 632

CONTENTS
1 PURPOSE
2 PRINCIPLE
3 SPECIFICATIONS
4 RELATED SOPs
5 SAFETY INSTRUCTIONS
6 OPERATION
  6.1 Principle
  6.2 Materials
  6.3 Reagents
  6.4 Precautions
  6.5 Accuracy
  6.6 Starting
  6.7 Calibration and adjustment
  6.8 Measurement
7 CHECKING AND MAINTENANCE
8 REFERENCES
Author (sign.):      Head (sign.):      Date of Expiry:

STANDARD OPERATING PROCEDURE                        Page: 2 # 5
LOGO   Model: APP 071   Version: 1   Date: 94-11-22
Title: Operation of pH meter Metrohm E 632

1 PURPOSE
To measure the pH of soil pastes, extracts, solutions and waters.

2 PRINCIPLE
The potentiometric pH measurement is based on measuring the difference in electrical potential between solution and electrode. It is a relative measurement, dependent on electrode and temperature. Therefore, the pH meter must be calibrated and adjusted (standardized) with standard buffers of known pH.

3 SPECIFICATIONS
With glass electrodes the pH range is 0 - 12. Readability: 0.01 unit.

4 RELATED SOPs
- F 002      Administration of SOPs
- F 011      Standard instruction for drafting apparatus SOPs
- APP 041    Maintenance Logbook
- APP 042    User Logbook
- APP 003    Instrument Identification List
- APP 004    Instrument Maintenance List
- APP ...    Inspection and maintenance of pH meter Metrohm E 632
- APP ...    Inspection and maintenance of combination glass electrodes

5 SAFETY INSTRUCTIONS
Not applicable.

6 OPERATION

6.1 Principle
The standardization of the pH meter consists of two adjustment steps. The deviation from the preset ("true") value of the buffer solutions is electronically compensated. The first step is always executed with a pH 7 buffer, whereas the second step can be done with a lower (e.g. pH 4) or higher (pH 9 or 10) buffer, depending on the range in which the sample measurements are made (in exceptional cases a buffer of very low pH may be required, e.g. pH 2).

6.2 Materials
Thermometer, -10 to 100 °C, accuracy 0.5 °C.

6.3 Reagents
Buffer solutions pH 4.00, 7.00 and 9.00 or 10.00 (25 °C). Dilute standard analytical concentrate ampoules according to instruction. (Note: Standard buffer solutions of which the pH values deviate slightly from these values can also be used.)
Water. Deionized or distilled water, with electrical conductivity <2 µS/cm and pH > 5.6 (Grade 2 water according to ISO 3696).
Note: If no standard ampoules are used, buffer solutions can be prepared as follows (these solutions can also be prepared to act as "independent" standards):

STANDARD OPERATING PROCEDURE                        Page: 3 # 5
LOGO   Model: APP 071   Version: 1   Date: 94-11-22
Title: Operation of pH meter Metrohm E 632

Buffer solution pH 4. Dissolve 10.21 g potassium hydrogen phthalate, C8H5KO4, in water in a 1 L volumetric flask and make to volume with water. (First dry the potassium hydrogen phthalate at 110 °C for at least 2 hrs.) The pH of this 0.05 M phthalate solution is 4.00 at 20 °C and 4.01 at 25 °C.

Buffer solution pH 7. Dissolve 3.40 g potassium dihydrogen phosphate, KH2PO4, and 3.55 g disodium hydrogen phosphate, Na2HPO4, in water in a 1 L volumetric flask and make to volume with water. (Both phosphates should first be dried at 110 °C for at least 2 hrs.) The pH of this 0.025 M (of each phosphate) solution is 6.88 at 20 °C and 6.86 at 25 °C.

Buffer solution pH 9. Dissolve 3.80 g disodium tetraborate decahydrate, Na2B4O7.10H2O (borax), in water in a 1 L volumetric flask and make to volume with water. (Note: Observe the expiry date of the borax; it may lose crystal water upon aging.) The pH of this 0.01 M borax solution is 9.22 at 20 °C and 9.18 at 25 °C.

6.4 Precautions
- The electrode must be stored in a 3 M KCl solution.
- The diaphragm of the electrode must be submerged in the solution during measurement.
- The electrolyte level inside the electrode must be above the level of the solution being measured.

6.5 Accuracy (bias)
The pH is readable in 2 decimals. For standardization procedures and the preparation of reagents the second decimal has significance and can be used. For the measurement of soil suspensions and extracts the second decimal usually has no meaning and the result should be rounded off to one decimal. (For rules of decimal significance and rounding off see Chapter 7 of these Guidelines for QM.)
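The molarities stated in the buffer recipes of Section 6.3 can be cross-checked arithmetically as mass divided by molar mass (for a 1 L flask). A sketch of that check; the molar masses are standard reference values, not taken from this SOP:

```python
# Cross-check of the buffer recipes: molarity (mol/L) = mass / (molar mass x volume).
# Molar masses in g/mol (standard reference values, assumed here):
MOLAR_MASS = {
    "C8H5KO4 (phthalate)":     204.22,
    "KH2PO4":                  136.09,
    "Na2HPO4":                 141.96,
    "Na2B4O7.10H2O (borax)":   381.37,
}

def molarity(mass_g: float, molar_mass: float, volume_l: float = 1.0) -> float:
    """Concentration of a salt dissolved to the given volume."""
    return mass_g / (molar_mass * volume_l)

# Masses from the recipes above, each made to 1 L:
for salt, grams in [("C8H5KO4 (phthalate)", 10.21),
                    ("KH2PO4", 3.40),
                    ("Na2HPO4", 3.55),
                    ("Na2B4O7.10H2O (borax)", 3.80)]:
    print(f"{salt}: {molarity(grams, MOLAR_MASS[salt]):.4f} M")
```

The printed values round to 0.0500, 0.0250, 0.0250 and 0.0100 M, matching the 0.05 M phthalate, 0.025 M (of each phosphate) and 0.01 M borax concentrations stated in the recipes.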
6.6 Starting
- Connect electrode with socket on the back of the instrument.
- Switch on mains with push button 7 (see Figure 1). The instrument is now ready for use.
- If necessary, push button 3 (stand-by) and button 5 (pH), and set switch 13 (slope) to 1.00.

6.7 Calibration and adjustment
These should always be performed after:
- switching on the pH meter;
- replacement of the electrode;
- checking of the calibration, when the deviation of the pH from the theoretical value of the standard buffer appears to exceed 0.05 unit.
When the pH meter is on and already adjusted, only a check of the adjustment is needed (described in Section 7.1 of this SOP).

6.7.1 Calibration step 1
- Transfer sufficient standard buffer solution pH 7.00 to a 50 ml or 100 ml beaker.
- Measure temperature of buffer and set switch 14 (temp. compensation) to this temperature.
- Immerse electrode in buffer solution and push button 4 (measure).
- With button 6 (Ucomp) adjust the value on display (8) to the theoretical pH value of the buffer at the measured temperature. (Note: this value can be read from a table enclosed with the standard ampoule.)
- Push button 3 (stand-by). Rinse electrode with water. The setting of button 6 (Ucomp) should now not be changed any more.

6.7.2 Calibration step 2
- Transfer a sufficient volume of one of the two other buffer solutions (pH 4 or 9) to a 50 ml or 100 ml beaker. (Note: this second buffer is chosen such that the pH of the solution to be measured falls in between the first and second calibration buffer.)
- Measure temperature of buffer and adjust switch 14 (temp. compensation) to this temperature.
posh button Inoperaesbe position 7 © shows the cou Vu" suit tercvoleame @ set on x 7 tpn or Ot bon socnce’ GS positive @ ‘sneasure” push button “U/av™ switen suring position foe high-impedance a © rents (cedar voltages) © “pu ovitcn © “Vcomp” counter-voltage shvestly to the pa value of OOO “U/mv" switeh with yor © ®@ Relative electrode slope ® ‘Temperature compensation Set to the tonperatuce oF @ mains svitch push button METROW AG Eekonine Magu, CH-2100 Hera, Schwet Telon O71 51 1884 Telex 77267 Fig. 1. Front panel of Metrohm pH meter E 632. Chapter 5. Materials: Apparatus, Reagents, Samples o STANDARD OPERATING PROCEDURE Page: 5 #5 LoGO | Model: APP 071 Version: 1 Date: 94-11-22 Title: Operation of pH meter Metrohm E 632 + Immerse electrode in buffer solution and push button 4 (measure). > With switeh 13 (slope) adjust the value on the display to the theoretical pH value of this buffer. (Note: this value can be read from a table enclosed with the standard ampoule) The setting of switch 13 may not be tower than 0.95. If this condition is not met, this electrode may not be used for the measurement and must be exchanged for another one which does meet the condition. = Push button 3 (stand-by) and rinse electrode with water. = As a check, repeat readings of buffers (pH 7 first) and readjust according to Step | and 2 if necessary. 68 Measurement ~ Measure temperature of solution (or suspension) to be measured and adjust switch 14 (cemp. compensation) to this temperature. = Irnmerse electrode in solution (or suspension) to be measured, + Push button 4 (measure) and read pH value. Note: For Quality Control it is essential to include measurement of an independent buffer solution of inown pH (as a check on calibration) and of a control sample (in each batch, to check the: sytem under ‘measuring conditions) + Push button 3 (stand-by), rinse electrode with water and place in electrode holder filled with 3 MV KCL solution = Enter use in User Logbook. 
7 CHECKING AND MAINTENANCE

7.1 Checking of adjustment

Checking of the adjustment of previously adjusted pH meters (verification) is needed:
- Prior to each new use of the instrument
- During batch measurement. The frequency is indicated in the procedure of the investigation (e.g. after every 50 or 100 measurements or once every hour).

This verification is done with at least one of the calibration buffers indicated in Section 6.3. If the deviation exceeds 0.05 unit from the preset value, the instrument must be recalibrated and adjusted as described in Section 6.7 above.

7.2 Inspection and maintenance of electrodes

Periodical inspection of the pH electrodes, as well as inspection after complaints about malfunctioning, must be carried out by a qualified technician and is described in SOP Model APP ...

7.3 Inspection and maintenance of pH meter

Periodical inspection of the pH meter, as well as inspection after complaints about malfunctioning, must be carried out by a qualified technician and is described in SOP Model APP ...

8. REFERENCES

Metrohm. Instructions for use, digital pH meter E 632.
Metrohm. Application Bulletin 188/1e.
Bates, R.G. (1973) Determination of pH, theory and practice. John Wiley & Sons, New York.
DIN 19266. pH-Messung, Standardpufferlösungen.
ISO 3696. Water for analytical laboratory use. Specification and test methods.

Source: Delft Geotechnics, Delft

RF 032 Reagents Book

STANDARD OPERATING PROCEDURE - Page: 1 of 1
LOGO | Model: PROT 011 | Version: Draft | Date: 96-03-26
Title: Protocol for custody chain of samples

1 PURPOSE

To organize the pathway of samples through the institute.

2 PRINCIPLE

From the arrival at the institute until the discarding or final storage, samples usually go through several hands and are processed at several places.
To ensure their integrity and traceability, and to prevent that they get lost, their pathway and the responsible personnel involved ("chain of custody") must be documented.

3 RELATED SOPs

- RF 011    Form for accepting delivery of samples
- RF 001    Sample List
- RF 021    Form for accepting order for analysis
- RF ...    Sample Storage Logbook
- RF ...    Sample Location Logbook
- PROT ...  Storage of samples
- PROT ...  Disposal of sample material

4 PROCEDURE

4.1 Upon arrival of samples at the institute an authorized officer fills out form RF 011 (protocol for accepting delivery of samples).

4.2 If there is a regular custodian, the samples are handed over to him/her. (The custodian can be the officer who received the samples.)

4.3 Document RF 011 is taken to the person responsible for further processing (e.g. Project Officer, Head of Laboratory). This person signs for acceptance and keeps a copy of the form. Another copy is made for the Work Order File prepared for the corresponding work order. (This file contains hard copies of all relevant information and documents concerning the work order.) The original is kept at a designated place (e.g. book of forms RF 011).

Note. If samples can be received by more than one person or at more than one location/department, more than one book or file of forms RF 011 may be kept. The forms RF 011 could then be differentiated with a suffix (e.g. A, B, etc.).

4.4 The whereabouts of samples are recorded by the custodian in a Sample Location Logbook. If samples are stored behind lock and key, anybody taking out (sub)samples has to sign for this in a Sample Storage Logbook.

4.5 After completion of the analytical work, the sample is (re)stored for possible later use. The duration of storage is indicated in the Sample Storage Logbook. It is useful to record the location also in the Work Order File (e.g. on the Order Form RF 021). (Duration of storage may be determined by agreement with customer or by usual procedure of the institute, e.g. 1 year or indefinitely.
This is also recorded on the order form RF 021.)

Author: (sign.)    QA Officer: (sign.)    Date of Expiry: ......

STANDARD REGISTRATION FORM - Serial No.: A...
LOGO | Model: RF 011-A | Version: 2 | Page: 1 of 1
Title: Form for accepting delivery of samples | Date: 96-01-22

Date of arrival: ......                Work order no.: ......
Name Client/Project: ......
Address: ......
Carrier: ......
Origin of samples: ......
Number & kind of samples:
  a. ...... soil / plant / water samples*
  b. ...... ring or core samples (or: ...... boxes with core samples)
  c. ...... other (specify): ......
Condition of samples*:          moist / dry / unknown
Sample list enclosed*:          yes / no (if list is missing, make one for Work Order File)
Other information enclosed*:    yes / no
Order for analysis enclosed*:   yes / no
Type of packaging*:             crate / cardboard box / bag / other: ......
Number of packages:             ......
Condition of packages*:         undamaged / damaged (specify): ......

Samples received by: ......             sign: ......
Samples placed in custody of: ......    sign: ......
This document passed to:
  Project Officer (name): ......        sign: ......
  Laboratory (name): ......             sign: ......
  Other (name): ......                  sign: ......
Remarks:

* Circle as appropriate

STANDARD REGISTRATION FORM - Serial No.: ......
LOGO | Model: RF 021 | Version: 3 | Date: 96-12-06
Title: Form for accepting order for analysis | Page: 1 of 2

Date: ......                           Work order no.: ......
Name Client/Project: ......
Address: ......
Origin of samples: ......
Number & kind of samples:
  a. ...... soil / plant / water samples*
  b. ...... ring or core samples
  c. ...... other (specify): ......
Kind or particulars of material relevant for analytical approach: ......
Sample list correct?*:            yes / no (without proper list, order cannot be processed)
Condition of samples*:            moist / dry
Analytical programme submitted*:  yes / no (tick requested analyses overleaf)
All samples same programme?*:     yes / no (if "no", describe under Remarks overleaf)
Requested date of completion: ......
Sample residue*:                  discard / store indefinitely / store until (date): ......

Order accepted by (on behalf of lab): ......    sign: ......
Entered into SOILIMS:              date: ......  sign: ......
Order Confirmation sent to client: date: ......  sign: ......
Change in Registration by: ......  date: ......  sign: ......
Entered into SOILIMS:              date: ......  sign: ......
Order Confirmation to client:      date: ......  sign: ......
Remarks:

* Circle as appropriate

Tick requested analyses:                          RF 021 - Page: 2 of 2

Procedure / Code(s):
O Preparation                O Dithionite extraction
O pH-H2O                     O Acid oxalate extraction
O pH-KCl                     O Na pyrophosphate extraction
O EC                         O P-retention
O pH-NaF                     O Particle-size analysis (specify fractions below)
O ODOE                       O Water-dispersible clay
O Melanic index              O CEC
O Exchangeable bases         O DTPA extr. (Cu, Fe, Zn, Mn)
O Exchangeable acidity       O Boron (hot water)
O Exchangeable Al            O Organic carbon
O Total carbon               O Saturation extract
O Carbonate equivalent       O 1:3 extract
O Gypsum                     O Available phosphate
O pF: 0 - 1 - 1.5 - 2 - 2.3 - 2.7 - 3.4 - 4.2
O Bulk density               O Specific surface area
O X-ray diffraction*: clay / whole sample / other fractions: ...... treatments: ......
O Guinier photo*: clay / whole sample / other fractions: ...... treatments: ......
O Plant analysis (specify below)
O Water analysis (specify below)
Remarks:

* Circle as appropriate

Chapter 6

BASIC STATISTICAL TOOLS

"There are lies, damned lies, and statistics." (Anon.)

6.1 Introduction

In the preceding chapters basic elements for the proper execution of analytical work such as personnel, laboratory facilities, equipment, and reagents were discussed. Before embarking upon the actual analytical work, however, one more tool for the quality assurance of the work must be dealt with: the statistical operations necessary to control and verify the analytical procedures (Chapter 7) as well as the resulting data (Chapter 8). It was stated before that making mistakes in analytical work is unavoidable.
This is the reason why a complex system of precautions to prevent errors and traps to detect them has to be set up. An important aspect of quality control is the detection of both random and systematic errors. This can be done by critically looking at the performance of the analysis as a whole and also of the instruments and operators involved in the job. For the detection itself as well as for the quantification of the errors, statistical treatment of data is indispensable.

A multitude of different statistical tools is available, some of them simple, some complicated, and often very specific for certain purposes. In analytical work, the most important common operation is the comparison of data, or sets of data, to quantify accuracy (bias) and precision. Fortunately, with a few simple convenient statistical tools most of the information needed in regular laboratory work can be obtained: the "t-test", the "F-test", and regression analysis. Therefore, examples of these will be given in the ensuing pages.

Clearly, statistics are a tool, not an aim. Simple inspection of data, without statistical treatment, by an experienced and dedicated analyst may be just as useful as statistical figures on the desk of the disinterested. The value of statistics lies with organizing and simplifying data, to permit some objective estimate showing that an analysis is under control or that a change has occurred. Equally important is that the results of these statistical procedures are recorded and can be retrieved.

6.2 Definitions

Discussing Quality Control implies the use of several terms and concepts with a specific (and sometimes confusing) meaning. Therefore, some of the most important concepts will be defined first.

6.2.1 Error

Error is the collective noun for any departure of the result from the "true" value*. Analytical errors can be:

1.
Random or unpredictable deviations between replicates, quantified with the "standard deviation".

2. Systematic or predictable regular deviation from the "true" value, quantified as "mean difference" (i.e. the difference between the true value and the mean of replicate determinations).

3. Constant, unrelated to the concentration of the substance analyzed (the analyte).

4. Proportional, i.e. related to the concentration of the analyte.

6.2.2 Accuracy

The "trueness" or the closeness of the analytical result to the "true" value. It is constituted by a combination of random and systematic errors (precision and bias) and cannot be quantified directly. The test result may be a mean of several values. An accurate determination produces a "true" quantitative value, i.e. it is precise and free of bias.

6.2.3 Precision

The closeness with which results of replicate analyses of a sample agree. It is a measure of dispersion or scattering around the mean value and is usually expressed in terms of standard deviation, standard error or a range (difference between the highest and the lowest result).

6.2.4 Bias

The consistent deviation of analytical results from the "true" value caused by systematic errors in a procedure. Bias is the opposite but most used measure for "trueness", which is the agreement of the mean of analytical results with the true value, i.e. excluding the contribution of randomness represented in precision. There are several components contributing to bias:

1. Method bias

The difference between the (mean) test result obtained from a number of laboratories using the same method and an accepted reference value. The method bias may depend on the analyte level.

2. Laboratory bias

The difference between the (mean) test result from a particular laboratory and the accepted reference value.

* The "true" value of an attribute is by nature indeterminate and often has only a very relative meaning.
Particularly in soil science, for several attributes there is no such thing as the true value as any value obtained is method-dependent (e.g. cation exchange capacity). Obviously, this does not mean that no adequate analysis serving a purpose is possible. It does, however, emphasize the need for the establishment of standard reference methods and the importance of external QC (see Chapter 9).

3. Sample bias

The difference between the mean of replicate test results of a sample and the ("true") value of the target population from which the sample was taken. In practice, for a laboratory this refers mainly to sample preparation, subsampling and weighing techniques. Whether a sample is representative for the population in the field is an extremely important aspect but usually falls outside the responsibility of the laboratory (in some cases laboratories have their own field sampling personnel).

The relationship between these concepts can be expressed in the following equation:

    x = μ + b + ε

where x is the analytical result, μ the "true" value, b the bias (systematic error) and ε the random error. The types of errors are illustrated in Fig. 6-1.

[Figure: diagrams illustrating the combinations of small/large random errors and absence/presence of bias.]

Fig. 6-1. Accuracy and precision in laboratory measurements. (Note that the qualifications apply to the mean of results: in c the mean is accurate but some individual results are inaccurate.)

6.3 Basic Statistics

In the discussions of Chapters 7 and 8 basic statistical treatment of data will be considered. Therefore, some understanding of these statistics is essential and they will briefly be discussed here.

The basic assumption to be made is that a set of data, obtained by repeated analysis of the same analyte in the same sample under the same conditions, has a normal or Gaussian distribution. (When the distribution is skewed, statistical treatment is more complicated.) The primary parameters used are the mean (or average) and the standard deviation (see Fig.
6-2) and the main tools the F-test, the t-test, and regression and correlation analysis.

[Figure: a Gaussian curve with the ranges x̄ ± s, x̄ ± 2s and x̄ ± 3s marked.]

Fig. 6-2. A Gaussian or normal distribution. The figure shows that (approx.) 68% of the data fall in the range x̄ ± s, 95% in the range x̄ ± 2s, and 99.7% in the range x̄ ± 3s.

6.3.1 Mean

The average of a set of n data xᵢ:

    x̄ = Σxᵢ / n                                          (6.1)

6.3.2 Standard Deviation

This is the most commonly used measure of the spread or dispersion of data around the mean. The standard deviation is defined as the square root of the variance (V). The variance is defined as the sum of the squared deviations from the mean, divided by n − 1. Operationally, there are several ways of calculation:

    s = √V = √[ Σ(xᵢ − x̄)² / (n − 1) ]                   (6.2)

or:

    s = √[ (Σxᵢ² − (Σxᵢ)²/n) / (n − 1) ]                 (6.3)

or:

    s = √[ (Σxᵢ² − n·x̄²) / (n − 1) ]                     (6.4)

The calculation of the mean and the standard deviation can easily be done on a calculator but most conveniently on a PC with computer programs such as dBASE, Lotus 123, Quattro-Pro, Excel, and others, which have simple ready-to-use functions. (Warning: some programs use n rather than n − 1!)

6.3.3 Relative Standard Deviation / Coefficient of Variation

Although the standard deviation of analytical data may not vary much over limited ranges of such data, it usually depends on the magnitude of such data: the larger the figures, the larger s. Therefore, for comparison of variations (e.g. precision) it is often more convenient to use the relative standard deviation (RSD) than the standard deviation itself. The RSD is expressed as a fraction, but more usually as a percentage and is then called coefficient of variation (CV). Often, however, these terms are confused.

    RSD = s / x̄                                          (6.6)

    CV = (s / x̄) × 100%                                  (6.7)

Note. When needed (e.g. for the F-test, see Eq.
6.11), the variance can, of course, be calculated by squaring the standard deviation: V = s² (6.5).

6.3.4 Confidence limits of a measurement

The more an analysis or measurement is replicated, the closer the mean x̄ of the results will approach the "true" value μ of the analyte content (assuming absence of bias). A single analysis of a test sample can be regarded as literally sampling the imaginary set of a multitude of results obtained for that test sample. The uncertainty of such subsampling is expressed by:

    μ = x̄ ± t·s / √n                                     (6.8)

where:
μ = "true" value (mean of large set of replicates)
x̄ = mean of subsamples
t = a statistical value which depends on the number of data and the required confidence (usually 95%)
s = standard deviation of mean of subsamples
n = number of subsamples

(The term s/√n is also known as the standard error of the mean.)

The critical values for t are tabulated in Appendix 1 (they are, therefore, here referred to as t_tab). To find the applicable value, the number of degrees of freedom has to be established by: df = n − 1 (see also Section 6.4.2).

Example

For the determination of the clay content in the particle-size analysis, a semi-automatic pipette installation is used with a 20 mL pipette. This volume is approximate and the operation involves the opening and closing of taps. Therefore, the pipette has to be calibrated, i.e. both the accuracy (trueness) and precision have to be established.

A tenfold measurement of the volume yielded the following set of data (in mL):

    19.941  19.812  19.829  19.828  19.742
    19.797  19.937  19.847  19.885  19.804

The mean is 19.842 mL and the standard deviation 0.0627 mL. According to Appendix 1, for n = 10, t_tab = 2.26, and using Eq. (6.8) this calibration yields:

    pipette volume = 19.842 ± 2.26 × (0.0627/√10) = 19.84 ± 0.04 mL

(Note that the pipette has a systematic deviation from 20 mL as this is outside the found confidence interval. See also bias, p.
91, 92.)

In routine analytical work, results are usually single values obtained in batches of several test samples. No laboratory will analyze a test sample 50 times to be confident that the result is reliable. Therefore, the statistical parameters have to be obtained in another way. Most usually this is done by method validation (see Chapter 7) and/or by keeping control charts, which is basically the collection of analytical results from one or more control samples in each batch (see Chapter 8). Equation (6.8) is then reduced to:

    μ = x ± t·s                                          (6.9)

where:
μ = "true" value
x = single measurement
t = applicable t_tab (Appendix 1)
s = standard deviation of set of previous measurements

In Appendix 1 it can be seen that if the set of replicated measurements is large (say > 30), t is close to 2. Therefore, the (95%) confidence of the result x of a single test sample (n = 1 in Eq. 6.8) is approximated by the commonly used and well-known expression:

    μ = x ± 2s                                           (6.10)

where s is the previously determined standard deviation of the large set of replicates (see also Fig. 6-2).

Note: This "method-s" of a control sample is not a constant and may vary for different test materials, analyte levels, and with analytical conditions.

Running duplicates will, according to Equation (6.8), increase the confidence of the (mean) result by a factor √2:

    μ = x̄ ± t·s / √2

where:
x̄ = mean of duplicates
s = known standard deviation of large set

Similarly, triplicate analysis will increase the confidence by a factor √3, etc. Duplicates are further discussed in Section 8.3.3.

Thus, in summary, Equation (6.8) can be applied in various ways to determine the size of errors (confidence) in analytical work or measurements: single determinations in routine work, determinations for which no previous data exist, certain calibrations, etc.
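The pipette-calibration example above can be reproduced in a few lines of code. The sketch below (Python; the two-sided t value of 2.26 for df = 9 is taken from the example, cf. Appendix 1) computes the mean, the standard deviation with n − 1 in the denominator (Eq. 6.2), and the 95% confidence interval of Eq. (6.8):

```python
import math

# Tenfold pipette calibration from the example above (volumes in mL)
data = [19.941, 19.812, 19.829, 19.828, 19.742,
        19.797, 19.937, 19.847, 19.885, 19.804]

n = len(data)
mean = sum(data) / n
# Standard deviation with n - 1 in the denominator (Eq. 6.2)
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

t_tab = 2.26                           # two-sided, 95%, df = n - 1 = 9
half_width = t_tab * s / math.sqrt(n)  # Eq. (6.8): mu = mean +/- t*s/sqrt(n)

print(f"volume = {mean:.3f} +/- {half_width:.2f} mL")
```

This reproduces the result 19.842 ± 0.04 mL; since the nominal 20 mL lies outside this interval, the pipette shows a systematic deviation.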
6.3.5 Propagation of errors

The final result of an analysis is often calculated from several measurements performed during the procedure (weighing, calibration, dilution, titration, instrument readings, moisture correction, etc.). As was indicated in Section 6.2, the total error in an analytical result is an adding-up of the sub-errors made in the various steps. For daily practice, the bias and precision of the whole method are usually the most relevant parameters (obtained from validation, Chapter 7; or from control charts, Chapter 8). However, sometimes it is useful to get an insight in the contributions of the subprocedures (and then these have to be determined separately), for instance if one wants to change (part of) the method. Because the "adding-up" of errors is usually not a simple summation, this will be discussed. The main distinction to be made is between random errors (precision) and systematic errors (bias).

6.3.5.1 Propagation of random errors

In estimating the total random error from factors in a final calculation, the treatment of summation or subtraction of factors is different from that of multiplication or division.

1. Summation calculations

If the final result x is obtained from the sum (or difference) of (sub)measurements a, b, c, etc.:

    x = a + b + c + ...

then the total precision is expressed by the standard deviation obtained by taking the square root of the sum of the individual variances:

    s_x = √( s_a² + s_b² + s_c² + ... )

If a (sub)measurement has a constant multiplication factor or coefficient (such as an extra dilution), then this is included to calculate the effect of the variance concerned, e.g. (2s_b)².

Example

The Effective Cation Exchange Capacity of soils (ECEC) is obtained by summation of the exchangeable cations:

    ECEC = Exch. (Ca + Mg + Na + K + H + Al)

Standard deviations experimentally obtained for exchangeable Ca, Mg, Na, K and (H+Al) on a certain sample, e.g. a control sample, are: 0.30, 0.25, 0.15, 0.15, and 0.60 cmol_c/kg, respectively.
The total precision is:

    s_ECEC = √(0.30² + 0.25² + 0.15² + 0.15² + 0.60²) = 0.75 cmol_c/kg

It can be seen that the total standard deviation is larger than the highest individual standard deviation, but (much) less than their sum. It is also clear that if one wants to reduce the total standard deviation, quantitatively the best result can be expected from reducing the largest contribution, in this case the exchangeable acidity.

2. Multiplication calculations

If the final result x is obtained from multiplication (or division) of (sub)measurements according to:

    x = (a × b / c) × ...

then the total error is expressed by the standard deviation obtained by taking the square root of the sum of the squared individual relative standard deviations (RSD or CV, as a fraction or as percentage, see Eqs. 6.6 and 6.7):

    RSD_x = √( RSD_a² + RSD_b² + RSD_c² + ... )

If a (sub)measurement has a constant multiplication factor or coefficient, then this is included to calculate the effect of the RSD concerned, e.g. (2·RSD_b)².

Example

The calculation of Kjeldahl-nitrogen may be as follows:

    %N = ((a − b) × M × 1.4 × mcf) / s

where:
a = ml HCl required for titration of sample
b = ml HCl required for titration of blank
s = air-dry sample weight in gram
M = molarity of HCl
1.4 = 14 × 10⁻³ × 100% (14 = atomic weight of N)
mcf = moisture correction factor

Note that in addition to multiplications, this calculation contains a subtraction also (often, calculations contain both summations and multiplications).

Firstly, the standard deviation of the titration (a − b) is determined as indicated above. This is then transformed to RSD using Equations (6.6) or (6.7). Then the RSDs of the other individual parameters have to be determined experimentally. The found RSDs are, for instance: distillation: 0.8%, titration: 0.5%, molarity: 0.2%, sample weight: 0.2%, mcf: 0.2%.

The total calculated precision is:

    RSD_N = √(0.8² + 0.5² + 0.2² + 0.2² + 0.2²) = 1.0%

Here again, the highest RSD (of distillation) dominates the total precision.
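Both propagation rules can be captured in two small helper functions. The sketch below (Python; the function names are ours) reproduces the ECEC and Kjeldahl examples above:

```python
import math

def propagate_sum(sds):
    """Random error of a sum or difference: square root of the sum of variances."""
    return math.sqrt(sum(s ** 2 for s in sds))

def propagate_product(rsds):
    """Random error of a product or quotient: quadrature sum of relative SDs."""
    return math.sqrt(sum(r ** 2 for r in rsds))

# ECEC example: SDs of exchangeable Ca, Mg, Na, K and (H+Al), in cmolc/kg
s_ecec = propagate_sum([0.30, 0.25, 0.15, 0.15, 0.60])

# Kjeldahl example: RSDs (in %) of distillation, titration, molarity,
# sample weight and moisture correction factor
rsd_n = propagate_product([0.8, 0.5, 0.2, 0.2, 0.2])

print(round(s_ecec, 2), round(rsd_n, 1))   # 0.75 1.0
```

Replacing the largest input (here 0.60 for exchangeable acidity, or 0.8% for distillation) is, as the text notes, the quickest way to reduce the total random error.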
In practice, the precision of the Kjeldahl method is usually considerably worse (≈ 2.5%), probably mainly as a result of the heterogeneity of the sample. The present example does not take that into account. It would imply that 2.5% − 1.0% = 1.5%, or 3/5 of the total random error, is due to sample heterogeneity (or another overlooked cause). This implies that painstaking efforts to improve subprocedures such as the titration or the preparation of standard solutions may not be very rewarding. It would, however, pay to improve the homogeneity of the sample, e.g. by careful grinding and mixing in the preparatory stage.

Note. Sample heterogeneity is also represented in the moisture correction factor. However, the influence of this factor on the final result is usually very small.

6.3.5.2 Propagation of systematic errors

Systematic errors of (sub)measurements contribute directly to the total bias of the result since the individual parameters in the calculation of the final result each carry their own bias. For instance, the systematic error in a balance will cause a systematic error in the sample weight (as well as in the moisture determination). Note that some systematic errors may cancel out, e.g. weighings by difference may not be affected by a biased balance.

The only way to detect or avoid systematic errors is by comparison (calibration) with independent standards and outside reference or control samples.

6.4 Statistical tests

In analytical work a frequently recurring operation is the verification of performance by comparison of data. Some examples of comparisons in practice are:

- performance of two instruments,
- performance of two methods,
- performance of a procedure in different periods,
- performance of two analysts or laboratories,
- results obtained for a reference or control sample with the "true", "target" or "assigned" value of this sample.
Some of the most common and convenient statistical tools to quantify such comparisons are the F-test, the t-tests, and regression analysis. Because the F-test and the t-tests are the most basic tests, they will be discussed first. These tests examine if two sets of normally distributed data are similar or dissimilar (belong or not belong to the same "population") by comparing their standard deviations and means respectively. This is illustrated in Fig. 6-3.

[Figure: three pairs of Gaussian curves: A. x̄₁ ≠ x̄₂, s₁ = s₂; B. x̄₁ = x̄₂, s₁ ≠ s₂; C. x̄₁ ≠ x̄₂, s₁ ≠ s₂.]

Fig. 6-3. Three possible cases when comparing two sets of data (n₁ = n₂). A. Different mean (bias), same precision; B. Same mean (no bias), different precision; C. Both mean and precision are different. (The fourth case, identical sets, has not been drawn.)

6.4.1 Two-sided vs. One-sided test

These tests for comparison, for instance between methods A and B, are based on the assumption that there is no significant difference (the "null hypothesis"). In other words, when the difference is so small that a tabulated critical value of F or t is not exceeded, we can be confident (usually at 95% level) that A and B are not different. Two fundamentally different questions can be asked concerning both the comparison of the standard deviations s₁ and s₂ with the F-test, and of the means x̄₁ and x̄₂ with the t-test:

1. are A and B different? (two-sided test)
2. is A higher (or lower) than B? (one-sided test)

This distinction has an important practical implication as statistically the probabilities for the two situations are different: the chance that A and B are only different ("it can go two ways") is twice as large as the chance that A is higher (or lower) than B ("it can go only one way"). The most common case is the two-sided (also called two-tailed) test: there are no particular reasons to expect that the means or the standard deviations of two data sets are different. An example is the routine comparison of a control chart with the previous one (see 8.3).
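The practical consequence of the two-sided/one-sided choice can be sketched as follows (Python; the critical values 2.26 and 1.83 are the standard two-sided and one-sided 95% t values for df = 9, as tabulated in t tables such as Appendix 1):

```python
# Two-sided vs. one-sided critical t values, 95% confidence, df = 9
T_TWO_SIDED = 2.26   # "are A and B different?"
T_ONE_SIDED = 1.83   # "is A higher (or lower) than B?"

def rejects_null(t_calc, one_sided=False):
    """True when the calculated t exceeds the applicable critical value."""
    t_tab = T_ONE_SIDED if one_sided else T_TWO_SIDED
    return abs(t_calc) > t_tab

# A calculated t of 2.0 falls between the two critical values, so the two
# tests reach opposite conclusions on the same data:
print(rejects_null(2.0), rejects_null(2.0, one_sided=True))   # False True
```

This is exactly the "contradictory conclusions" situation the text warns about, and why the direction of a one-sided test must be chosen before the results are seen.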
However, when it is expected or suspected that the mean and/or the standard deviation will go only one way, e.g. after a change in an analytical procedure, the one-sided (or one-tailed) test is appropriate. In this case the probability that it goes the other way than expected is assumed to be zero and, therefore, the probability that it goes the expected way is doubled. Or, more correctly, the uncertainty in the two-way test of 5% (or the probability of 5% that the critical value is exceeded) is divided over the two tails of the Gaussian curve (see Fig. 6-2), i.e. 2.5% at the end of each tail beyond 2s. If we perform the one-sided test with 5% uncertainty, we actually increase this 2.5% to 5% at the end of one tail. (Note that for the whole Gaussian curve, which is symmetrical, this is then equivalent to an uncertainty of 10% in two ways!)

This difference in probability in the tests is expressed in the use of two tables of critical values for both F and t. In fact, the one-sided table at 95% confidence level is equivalent to the two-sided table at 90% confidence level.

It is emphasized that the one-sided test is only appropriate when a difference in one direction is expected or aimed at. Of course it is tempting to perform this test after the results show a clear (unexpected) effect. In fact, however, then a two times higher probability level was used in retrospect. This is underscored by the observation that in this way even contradictory conclusions may arise: if in an experiment calculated values of F and t are found within the range between the two-sided and one-sided values of F_tab and t_tab, the two-sided test indicates no significant difference, whereas the one-sided test says that the result of A is significantly higher (or lower) than that of B. What actually happens is that in the first case the 2.5% boundary in the tail was just not exceeded, and then, subsequently, this 2.5% boundary is relaxed to 5%, which is then obviously more easily exceeded.
This illustrates that statistical tests differ in strictness and that for proper interpretation of results in reports, the statistical techniques used, including the confidence limits or probability, should always be specified.

6.4.2 F-test for precision

Because the result of the F-test may be needed to choose between the Student's t-test and the Cochran variant (see next section), the F-test is discussed first.

The F-test (or Fisher's test) is a comparison of the spread of two sets of data to test if the sets belong to the same population, in other words if the precisions are similar or dissimilar. The test makes use of the ratio of the two variances:

    F = s₁² / s₂²                                        (6.11)

where the larger s² must be the numerator by convention. If the performances are not very different, then the estimates s₁ and s₂ do not differ much and their ratio (and that of their squares) should not deviate much from unity. In practice, the calculated F is compared with the applicable F value in the F-table (also called the critical value, see Appendix 2). To read the table it is necessary to know the applicable number of degrees of freedom for s₁ and s₂. These are calculated by:

    df₁ = n₁ − 1
    df₂ = n₂ − 1

If F_calc ≤ F_tab one can conclude with 95% confidence that there is no significant difference in precision (the "null hypothesis" that s₁ = s₂ is accepted). Thus, there is still a 5% chance that we draw the wrong conclusion. In certain cases more confidence may be needed; then a 99% confidence table can be used, which can be found in statistical textbooks.

Example 1 (two-sided test)

Table 6-1 gives the data sets obtained by two analysts for the cation exchange capacity (CEC) of a control sample. Using Equation (6.11) the calculated F value is 1.62. As

Table 6-1.
CEC values (in cmol_c/kg) of a control sample determined by two analysts (10 replicates each).

    Analyst 1: x̄₁ = 10.34, n₁ = 10
    Analyst 2: n₂ = 10
    (t_tab = 2.10)

we had no particular reason to expect that the analysts would perform differently, we use the F-table for the two-sided test and find F_tab = 4.03 (Appendix 2, df₁ = df₂ = 9). This exceeds the calculated value and the null hypothesis (no difference) is accepted. It can be concluded with 95% confidence that there is no significant difference in precision between the work of Analysts 1 and 2.

Example 2 (one-sided test)

The determination of the calcium carbonate content with the Scheibler standard method is compared with the simple and more rapid "acid-neutralization" method using one and the same sample. The results are given in Table 6-2. Because of the nature of the rapid method we suspect it to produce a lower precision than obtained with the Scheibler method and we can, therefore, perform the one-sided F-test. The applicable F_tab = 3.07 (App. 2, df₁ = 12, df₂ = 9), which is lower than F_calc (18.3), and the null hypothesis (no difference) is rejected. It can be concluded (with 95% confidence) that for this one sample the precision of the rapid titration method is significantly worse than that of the Scheibler method.

Table 6-2. Contents of CaCO₃ (in mass/mass %) in a soil sample determined with the Scheibler method (A) and the rapid titration method (B).

    A: s = 0.099, n = 10
    B: s = 0.424, n = 13
    F_calc = 18.3;  F_tab (one-sided) = 3.07
    (t* = Cochran's "alternative" t; see p. 80)

6.4.3 t-Tests for bias

Depending on the nature of two sets of data (n, s, sampling nature), the means of the sets can be compared for bias by several variants of the t-test.
The following most common types will be discussed:

1. the Student's t-test for comparison of two independent sets of data with very similar standard deviations;
2. the Cochran variant of the t-test when the standard deviations of the independent sets differ significantly;
3. the paired t-test for comparison of strongly dependent sets of data.

Basically, for the t-tests Equation (6.8) is used but written in a different way:

t_cal = |x̄ − µ| √n / s    (6.12)

where
x̄ = mean of test results of a sample
µ = "true" or reference value
s = standard deviation of test results
n = number of test results of the sample.

To compare the mean of a data set with a reference value, normally the "two-sided table of critical values" is used (Appendix 1). The applicable number of degrees of freedom here is:

df = n − 1

If a value for t calculated with Equation (6.12) does not exceed the critical value in the table, the data are taken to belong to the same population: there is no difference and the "null hypothesis" is accepted (with the applicable probability, usually 95%).

As with the F-test, when it is expected or suspected that the obtained results are higher or lower than that of the reference value, the one-sided t-test can be performed: if t_cal > t_tab, then the results are significantly higher (or lower) than the reference value.

More commonly, however, the "true" value of proper reference samples is accompanied by the associated standard deviation and number of replicates used to determine these parameters. We can then apply the more general case of comparing the means of two data sets: the "true" value in Equation (6.12) is then replaced by the mean of a second data set. As is shown in Fig. 6-3, to test if two data sets belong to the same population it is tested if the two Gauss curves sufficiently overlap; in other words, if the difference between the means x̄₁ − x̄₂ is small. This is discussed next.
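Equation (6.12) can be scripted directly. A minimal sketch follows; the replicate results and the reference value are hypothetical, and the critical value t_tab = 2.78 (two-sided, 95%, df = 4) is taken from Appendix 1.

```python
import math
import statistics

def t_cal(data, mu):
    """t of Eq. (6.12): |mean - reference value| * sqrt(n) / s."""
    n = len(data)
    return abs(statistics.mean(data) - mu) * math.sqrt(n) / statistics.stdev(data)

# Hypothetical replicate results for a reference sample with "true" value 10.0
results = [10.2, 9.9, 10.4, 10.0, 9.7]
t = t_cal(results, 10.0)
t_tab = 2.78                     # two-sided, 95%, df = n - 1 = 4 (Appendix 1)
print(round(t, 2), t <= t_tab)   # 0.33 True: no significant bias
```

Swapping `t_tab` for a one-sided critical value gives the one-sided variant described above.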
Similarity or non-similarity of standard deviations

When using the t-test for two small sets of data (n₁ and/or n₂ < 30), a choice of the type of test must be made depending on the similarity (or non-similarity) of the standard deviations of the two sets. If the standard deviations are sufficiently similar they can be "pooled" and the Student t-test can be used. When the standard deviations are not sufficiently similar, an alternative procedure for the t-test must be followed in which the standard deviations are not pooled. A convenient alternative is the Cochran variant of the t-test. The criterion for the choice is the passing or non-passing of the F-test (see 6.4.2), that is, whether or not the variances differ significantly. Therefore, for small data sets, the F-test should precede the t-test. For dealing with large data sets (n₁, n₂ ≥ 30) the "normal" t-test is used (see Section 6.4.3.3 and App. 3).

6.4.3.1 Student's t-test

(To be applied to small data sets (n₁, n₂ < 30) where s₁ and s₂ are similar according to the F-test.)

When comparing two sets of data, Equation (6.12) is rewritten as:

t_cal = |x̄₁ − x̄₂| / ( s_p √(1/n₁ + 1/n₂) )    (6.13)

where
x̄₁ = mean of data set 1
x̄₂ = mean of data set 2
s_p = "pooled" standard deviation of the sets
n₁ = number of data in set 1
n₂ = number of data in set 2.

The pooled standard deviation s_p is calculated by:

s_p = √( [ (n₁ − 1)s₁² + (n₂ − 1)s₂² ] / (n₁ + n₂ − 2) )    (6.14)

where
s₁ = standard deviation of data set 1
s₂ = standard deviation of data set 2
n₁ = number of data in set 1
n₂ = number of data in set 2.

To perform the t-test, the critical t_tab has to be found in the table (Appendix 1); the applicable number of degrees of freedom df is here calculated by:

df = n₁ + n₂ − 2

Example
The two data sets of Table 6-1 can be used: with Equations (6.13) and (6.14), t_cal is calculated as 1.12, which is lower than the critical value t_tab of 2.10 (App.
1, df = 18, two-sided), hence the null hypothesis (no difference) is accepted and the two data sets are assumed to belong to the same population: there is no significant difference between the mean results of the two analysts (with 95% confidence).

Note. Another illustrative way to perform this test for bias is to calculate whether the difference between the means falls within or outside the range where this difference is still not significantly large; in other words, whether this difference is less than the least significant difference (lsd). This can be derived from Equation (6.13):

lsd = t_tab · s_p · √(1/n₁ + 1/n₂)    (6.15)

In the present example of Table 6-1, the calculation yields lsd = 0.69. The measured difference between the means is 10.34 − 9.97 = 0.37, which is smaller than the lsd, indicating that there is no significant difference between the performance of the analysts. In addition, in this approach the 95% confidence limits of the difference between the means can be calculated (cf. Equation 6.8): confidence limits = 0.37 ± 0.69 = −0.32 and 1.06. Note that the value 0 for the difference is situated within this confidence interval, which agrees with the null hypothesis of x̄₁ = x̄₂ (no difference) having been accepted.

6.4.3.2 Cochran's t-test

To be applied to small data sets (n₁, n₂ < 30) where s₁ and s₂ are dissimilar according to the F-test.

Calculate t with:

t_cal = |x̄₁ − x̄₂| / √( s₁²/n₁ + s₂²/n₂ )    (6.16)

and the alternative critical value with:

t*_tab = ( t₁·s₁²/n₁ + t₂·s₂²/n₂ ) / ( s₁²/n₁ + s₂²/n₂ )    (6.17)

where
t₁ = t_tab at n₁ − 1 degrees of freedom
t₂ = t_tab at n₂ − 1 degrees of freedom.

Now the t-test can be performed as usual: if t_cal ≤ t*_tab then the null hypothesis that the means do not significantly differ is accepted.

Example
The two data sets of Table 6-2 (p. 78) can be used. According to the F-test, the standard deviations differ significantly, so that the Cochran variant must be used. Furthermore, in contrast to our expectation that the precision of the rapid test would be inferior, we have no idea about the bias and therefore the two-sided test is appropriate. The calculations yield t_cal
= 3.12 and t*_tab = 2.18, meaning that t_cal exceeds t*_tab, which implies that the null hypothesis (no difference) is rejected and that the mean of the rapid analysis deviates significantly from that of the standard analysis (with 95% confidence, and for this sample only). Further investigation of the rapid method would have to include the use of more different samples, and then comparison with the one-sided test would be justified (see 6.4.3.4, Example 1).

6.4.3.3 t-Test for large data sets (n ≥ 30)

In the example above (6.4.3.2) the conclusion happens to have been the same if the Student's t-test with pooled standard deviations had been used. This is caused by the fact that the difference in result of the Student and Cochran variants of the t-test is largest when small sets of data are compared, and decreases with increasing number of data. Namely, with increasing number of data a better estimate of the real distribution of the population is obtained (the flatter t-distribution converges then to the standardized normal distribution). When n ≥ 30 for both sets, e.g. when comparing Control Charts (see 8.3), for all practical purposes the difference between the Student and Cochran variants is negligible. The procedure is then reduced to the "normal" t-test by simply calculating t_cal with Eq. (6.16) and comparing this with t_tab at df = n₁ + n₂ − 2. (Note in App. 1 that the two-sided t_tab is now close to 2.) The proper choice of the t-test as discussed above is summarized in a flow diagram in Appendix 3.

6.4.3.4 Paired t-test

When two data sets are not independent, the paired t-test can be a better tool for comparison than the "normal" t-test described in the previous sections. This is for instance the case when two methods are compared by the same analyst using the same sample(s). It could, in fact, also be applied to the example of Table 6-1 if the two analysts used the same analytical method at (about) the same time.
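The Student (Eqs. 6.13–6.14) and Cochran (Eqs. 6.16–6.17) variants discussed above can be sketched as follows. The summary statistics are the rounded values of Tables 6-1 and 6-2, and the t_tab values (2.18 at df = 12, 2.26 at df = 9) are taken from Appendix 1; with rounded inputs the calculated t differs slightly from the value in the text, but the conclusions are the same.

```python
import math

def t_student(m1, s1, n1, m2, s2, n2):
    """Pooled t-test, Eqs. (6.13)-(6.14); use when s1 and s2 pass the F-test."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return abs(m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

def t_cochran(m1, s1, n1, m2, s2, n2, t1, t2):
    """Cochran variant, Eqs. (6.16)-(6.17); returns (t_cal, t*_tab).
    t1, t2 are t_tab at n1-1 and n2-1 degrees of freedom (Appendix 1)."""
    w1, w2 = s1**2 / n1, s2**2 / n2
    t_cal = abs(m1 - m2) / math.sqrt(w1 + w2)
    t_star = (t1 * w1 + t2 * w2) / (w1 + w2)
    return t_cal, t_star

# Table 6-1 (similar precisions): t_cal = 1.12 < t_tab = 2.10 (df = 18)
print(round(t_student(10.34, 0.82, 10, 9.97, 0.64, 10), 2))   # 1.12

# Table 6-2 (dissimilar precisions): t1 = 2.18 (df = 12), t2 = 2.26 (df = 9)
t_cal, t_star = t_cochran(2.51, 0.099, 13, 2.13, 0.424, 10, 2.18, 2.26)
print(t_cal > t_star)   # True: the means differ significantly
```

Note how t*_tab is a variance-weighted mean of the two tabulated t values: the less precise set dominates the critical value.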
As stated previously, comparison of two methods using different levels of analyte gives more validation information about the methods than using only one level. Comparison of results at each level could be done by the F- and t-tests as described above. The paired t-test, however, allows for different levels provided the concentration range is not too wide. As a rule of thumb, the range of results should be within the same magnitude. If the analysis covers a longer range, i.e. several powers of ten, regression analysis must be considered (see Section 6.4.4). In intermediate cases, either technique may be chosen.

The null hypothesis is that there is no difference between the data sets, so the test is to see if the mean of the differences between the data deviates significantly from zero or not (two-sided test). If it is expected that one set is systematically higher (or lower) than the other set, then the one-sided test is appropriate.

Example 1
The "promising" rapid single-extraction method for the determination of the cation exchange capacity of soils using the silver thiourea complex (AgTU, buffered at pH 7) was compared with the traditional ammonium acetate method (NH₄OAc, pH 7). Although for certain soil types the difference in results appeared insignificant, for other types differences seemed larger. Such a suspect group were soils with ferralic (oxic) properties (i.e. highly weathered sesquioxide-rich soils). In Table 6-3 the results of ten soils with these properties are grouped to test if the

Table 6-3. CEC values (in cmol/kg) obtained by the NH₄OAc and AgTU methods (both at pH 7) for ten soils with ferralic properties.

 Sample   NH₄OAc   AgTU      d
   1        7.1     6.5    −0.6
   2        4.6     5.6    +1.0
   3       10.6    14.5    +3.9
   4        2.3     5.6    +3.3
   5       25.2    23.8    −1.4
   6        4.4    10.4    +6.0
   7        7.8     8.4    +0.6
   8        2.7     5.5    +2.8
   9       14.3    19.2    +4.9
  10       16.6    18.0    +1.4

 d̄ = +2.19;  s_d = 2.395;  t_cal = 2.89;  t_tab = 2.26
CEC methods give different results. The difference d within each pair and the parameters needed for the paired t-test are given also.

Using Equation (6.12) and noting that µ_d = 0 (hypothesis value of the differences, i.e. no difference), the t value can be calculated as:

t_cal = |d̄ − 0| √n / s_d = 2.19 × √10 / 2.395 = 2.89

where
d̄ = mean of differences within each pair of data
s_d = standard deviation of the differences
n = number of pairs of data.

The calculated t value (2.89) exceeds the critical value of 1.83 (App. 1, df = n − 1 = 9, one-sided), hence the null hypothesis that the methods do not differ is rejected, and it is concluded that the silver thiourea method gives significantly higher results as compared with the ammonium acetate method when applied to such highly weathered soils.

Note. Since such data sets do not have a normal distribution, the "normal" t-test which compares means of sets cannot be used here (the means do not constitute a fair representation of the sets). For the same reason, no information about the precision of the two methods can be obtained, nor can the F-test be applied. For information about precision, replicate determinations are needed.

Example 2
Table 6-4 shows the data of total-P in four plant tissue samples obtained by a laboratory L and the median values obtained by 123 laboratories in a proficiency (round-robin) test.

Table 6-4. Total-P contents (in mmol/kg) of plant tissue as determined by 123 laboratories (Median) and Laboratory L.

 Sample   Median   Lab L
   1       73.0     85.2
   2       20.1      …
   3       78.9     84.5
   4        1.5      …

 d̄ = 7.70;  s_d = 12.702;  n = 4;  t_cal = 1.21;  t_tab = 3.18

To verify the performance of the laboratory a paired t-test can be performed. Using Eq. (6.12) and noting that µ_d = 0 (hypothesis value of the differences, i.e. no difference), the t value can be calculated as:
t_cal = |d̄ − 0| √n / s_d = 7.70 × √4 / 12.702 = 1.21

The calculated t-value is below the critical value of 3.18 (Appendix 1, df = n − 1 = 3, two-sided), hence the null hypothesis that the laboratory does not significantly differ from the group of laboratories is accepted, and the results of Laboratory L seem to agree with those of "the rest of the world" (this is a so-called third-line control).

6.4.4 Linear correlation and regression

These also belong to the most common useful statistical tools to compare effects and performances of two factors X and Y. Although the technique is in principle the same for both, there is a fundamental difference in concept: correlation analysis is applied to independent factors: if X increases, what will Y do (increase, decrease, or perhaps not change at all)? In regression analysis a unilateral response is assumed: changes in X result in changes in Y, but changes in Y do not result in changes in X.

For example, in analytical work, correlation analysis can be used for comparing methods or laboratories, whereas regression analysis can be used to construct calibration graphs. In practice, however, comparison of laboratories or methods is usually also done by regression analysis. The calculations can be performed on a (programmed) calculator or more conveniently on a PC using a home-made program. Even more convenient are the regression programs included in statistical packages such as Statistix, Mathcad, Eureka, Genstat, Statcal, SPSS, and others. Also, most spreadsheet programs such as Lotus 1-2-3, Excel, and Quattro-Pro have functions for this.

Laboratories or methods are in fact independent factors. However, for regression analysis one factor has to be the independent or "constant" factor (e.g. the reference method, or the factor with the smallest standard deviation). This factor is by convention designated X, whereas the other factor is then the dependent factor Y (thus, we speak of "regression of Y on X").
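Returning briefly to the paired t-test of Section 6.4.3.4: Example 1 (Table 6-3) can be reproduced directly from the differences d. A minimal sketch, using the CEC data as tabulated and the one-sided critical value t_tab = 1.83 (95%, df = 9, Appendix 1):

```python
import math
import statistics

def t_paired(x, y):
    """Paired t-test, Eq. (6.12) applied to the differences d = y - x."""
    d = [b - a for a, b in zip(x, y)]
    n = len(d)
    return abs(statistics.mean(d)) * math.sqrt(n) / statistics.stdev(d)

# Table 6-3: CEC (cmol/kg) of ten ferralic soils by the two methods
nh4oac = [7.1, 4.6, 10.6, 2.3, 25.2, 4.4, 7.8, 2.7, 14.3, 16.6]
agtu   = [6.5, 5.6, 14.5, 5.6, 23.8, 10.4, 8.4, 5.5, 19.2, 18.0]

t = t_paired(nh4oac, agtu)
print(round(t, 2))   # 2.89, as in the text
print(t > 1.83)      # True: AgTU reads significantly higher (one-sided)
```

The same function applies to Example 2 (Table 6-4) once the individual differences are available.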
As was discussed in Section 6.4.3, such comparisons can often be done with the Student/Cochran or paired t-tests. However, correlation analysis is indicated:

1. When the concentration range is so wide that the errors, both random and systematic, are not independent (which is the assumption for the t-tests). This is often the case where concentration ranges of several magnitudes are involved.
2. When pairing is inappropriate for other reasons, notably a long time span between the two analyses (sample aging, change in laboratory conditions, etc.).

The principle is to establish a statistical linear relationship between two sets of corresponding data by fitting the data to a straight line by means of the "least squares" technique. Such data are, for example, analytical results of two methods applied to the same samples (correlation), or the response of an instrument to a series of standard solutions (regression).

Note: Naturally, non-linear higher-order relationships are also possible, but since these are less common in analytical work and more complex to handle mathematically, they will not be discussed here. Nevertheless, to avoid misinterpretation, always inspect the kind of relationship by plotting the data, either on paper or on the computer monitor.

The resulting line takes the general form:

y = bx + a    (6.18)

where
a = intercept of the line with the y-axis
b = slope (tangent).

In laboratory work, ideally, when there is perfect positive correlation without bias, the intercept a = 0 and the slope b = 1. This is the so-called "1:1 line" passing through the origin (dashed line in Fig. 6-5). If the intercept a ≠ 0 then there is a systematic discrepancy (bias, error) between X and Y; when b ≠ 1 then there is a proportional response or difference between X and Y.
The correlation between X and Y is expressed by the correlation coefficient r, which can be calculated with the following equation:

r = Σ(xᵢ − x̄)(yᵢ − ȳ) / √( Σ(xᵢ − x̄)² · Σ(yᵢ − ȳ)² )    (6.19)

It can be shown that r can vary from 1 to −1:

r = 1: perfect positive linear correlation
r = 0: no linear correlation (maybe other correlation)
r = −1: perfect negative linear correlation

Often, the correlation coefficient r is expressed as r²: the coefficient of determination or coefficient of variance. The advantage of r² is that, when multiplied by 100, it indicates the percentage of variation in Y associated with variation in X. Thus, for example, when r = 0.71, about 50% (r² = 0.504) of the variation in Y is due to the variation in X.

The line parameters b and a are calculated with the following equations:

b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²    (6.20)

and

a = ȳ − bx̄    (6.21)

It is worth noting that r is independent of the choice of which factor is the independent factor X and which is the dependent factor Y. However, the regression parameters a and b do depend on this choice, as the regression lines will be different (except when there is ideal 1:1 correlation).

6.4.4.1 Construction of calibration graph

As an example, we take a standard series of P (0–1.0 mg/L) for the spectrophotometric determination of phosphate in a Bray-I extract ("available P"), reading in absorbance units. The data and calculated terms needed to determine the parameters of the calibration graph are given in Table 6-5. The line itself is plotted in Fig. 6-4. Table 6-5 is presented here to give an insight in the steps and terms involved. The calculation of the correlation coefficient r with Equation (6.19) yields a value of 0.997 (r² = 0.995). Such high values are common for calibration graphs. When the value is not close to 1 (say, below 0.98) this must be taken as a warning, and it might then be advisable to repeat or review the procedure. Errors may have been made (e.g. in pipetting) or the used range of the graph may not be linear.
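The computation of b, a and r for this standard series can be sketched in a few lines, using Equations (6.19)–(6.21) directly on the x/y pairs of Table 6-5:

```python
import math

def linear_regression(x, y):
    """Least-squares line y = bx + a and correlation r (Eqs. 6.19-6.21)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))   # Σ(x-x̄)(y-ȳ)
    sxx = sum((xi - xm) ** 2 for xi in x)                      # Σ(x-x̄)²
    syy = sum((yi - ym) ** 2 for yi in y)                      # Σ(y-ȳ)²
    b = sxy / sxx
    a = ym - b * xm
    r = sxy / math.sqrt(sxx * syy)
    return b, a, r

# Standard series of P (mg/L) vs. absorbance (Table 6-5)
conc   = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
absorb = [0.05, 0.14, 0.29, 0.43, 0.52, 0.67]

b, a, r = linear_regression(conc, absorb)
print(round(b, 3), round(a, 3), round(r, 4))   # 0.626 0.037 0.9976
```

This reproduces the calibration line of Eq. (6.22) and the stated correlation coefficient of 0.997 (unrounded, 0.9976).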
On the other hand, a high r may be misleading, as it does not necessarily indicate linearity. Therefore, to verify this, the calibration graph should always be plotted, either on paper or on the computer monitor.

Using Equations (6.20) and (6.21) we obtain:

b = 0.438 / 0.70 = 0.626

and

a = 0.350 − 0.313 = 0.037

Thus, the equation of the calibration line is:

y = 0.626x + 0.037    (6.22)

Table 6-5. Parameters of calibration graph in Fig. 6-4.

   xᵢ     yᵢ    xᵢ−x̄   yᵢ−ȳ   (xᵢ−x̄)(yᵢ−ȳ)  (xᵢ−x̄)²   (yᵢ−ȳ)²
   0.0   0.05   −0.5   −0.30      0.150       0.25     0.0900
   0.2   0.14   −0.3   −0.21      0.063       0.09     0.0441
   0.4   0.29   −0.1   −0.06      0.006       0.01     0.0036
   0.6   0.43    0.1    0.08      0.008       0.01     0.0064
   0.8   0.52    0.3    0.17      0.051       0.09     0.0289
   1.0   0.67    0.5    0.32      0.160       0.25     0.1024
 Σ 3.0   2.10     0      0        0.438       0.70     0.2754
 x̄ = 0.5;  ȳ = 0.35

[Fig. 6-4. Calibration graph (absorbance vs. P concentration, mg/L) plotted from data of Table 6-5. The dashed lines delineate the 95% confidence area of the graph. Note that the confidence is highest at the centroid of the graph.]

During calculation, the maximum number of decimals is used; rounding off to the last significant figure is done at the end (see instructions for rounding off in Section 8.2).

Once the calibration graph is established, its use is simple: for each y value measured, the corresponding concentration x can be determined either by direct reading or by calculation using Equation (6.22). The use of calibration graphs is further discussed in Section 7.2.2.

Note. A treatise of the error or uncertainty in the regression line is given on p. 84.

6.4.4.2 Comparing two sets of data using many samples at different analyte levels

Although regression analysis assumes that one factor (on the x-axis) is constant, when certain conditions are met the technique can also successfully be applied to comparing two variables such as laboratories or methods. These conditions are:

- The most precise data set is plotted on the x-axis.
- At least 6, but preferably more than 10, different samples are analyzed.
- The samples should rather uniformly cover the analyte level range of interest.

To decide which laboratory or method is the most precise, multi-replicate results have to be used to calculate standard deviations (see 6.4.2).
If these are not available, then the standard deviations of the present sets could be compared (note that we are now not dealing with normally distributed sets of replicate results). Another convenient way is to run the regression analysis on the computer, reverse the variables and run the analysis again. Observe which variable has the lowest standard deviation (or standard error of the intercept a, both given by the computer) and then use the results of the regression analysis where this variable was plotted on the x-axis.

If the analyte level range is incomplete, one might have to resort to spiking or standard additions, with the inherent drawback that the original analyte-sample combination may not adequately be reflected.

Example
In the framework of a performance verification programme, a large number of soil samples were analyzed by two laboratories X and Y (a form of "third-line control", see Chapter 9) and the data compared by regression. (In this particular case, the paired t-test might have been considered also.) The regression line of a common attribute, the pH, is shown here as an illustration. Figure 6-5 shows the so-called "scatter plot" of 124 soil pH-H₂O determinations by the two laboratories. The correlation coefficient r is 0.97, which is very satisfactory. The slope b (= 1.03) indicates that the regression line is only slightly steeper than the 1:1 ideal regression line. Very disturbing, however, is the intercept a of −1.18. This implies that laboratory Y measures the pH more than a whole unit lower than laboratory X at the low end of the pH range (the intercept −1.18 is at pH_X = 0), which difference decreases to about 0.8 unit at the high end.

[Fig. 6-5. Scatter plot of pH data of two laboratories.
Drawn line: regression line; dashed line: 1:1 ideal regression line.]

The t-test for significance is as follows:

For intercept a: µ_a = 0 (null hypothesis: no bias; the ideal intercept is then zero), standard error = 0.14 (calculated by the computer), and using Equation (6.12) we obtain:

t_cal = |−1.18 − 0| / 0.14 = 8.4

Here, t_tab = 1.98 (App. 1, two-sided, df = n − 2 = 122, because an extra degree of freedom is lost as the data are used for both a and b), hence the laboratories have a significant mutual bias.

For slope b: µ_b = 1 (ideal slope: the null hypothesis is no difference), standard error = 0.02 (given by the computer), and again using Equation (6.12) we obtain:

t_cal = |1 − 1.03| / 0.02 = 1.5

Again, t_tab = 1.98 (App. 1, two-sided, df = 122), hence the difference between the laboratories is not significantly proportional (or: the laboratories do not have a significant difference in sensitivity). These results suggest that, in spite of the good correlation, the two laboratories would have to look into the cause of the bias.

Note. In the present example, the scattering of the points around the regression line does not seem to change much over the whole range. This indicates that the precision of laboratory Y does not change very much over the range with respect to laboratory X. This is not always the case. In such cases, weighted regression (not discussed here) is more appropriate than the unweighted regression as used here. Validation of a method (see Section 7.5) may reveal that precision can change significantly with the level of analyte (and with other factors such as sample matrix).

6.4.5
Analysis of variance (ANOVA)

When results of laboratories or methods are compared where more than one factor can be of influence and must be distinguished from random effects, then ANOVA is a powerful statistical tool to be used. Examples of such factors are: different analysts, samples with different pre-treatments, different analyte levels, different methods within one of the laboratories. Most statistical packages for the PC can perform this analysis. As a treatise of ANOVA is beyond the scope of the present Guidelines, for further discussion the reader is referred to statistical textbooks, some of which are given in the list of Literature.

Error or uncertainty in the regression line

The "fitting" of the calibration graph is necessary because the response points yᵢ composing the line do not fall exactly on the line. Hence, random errors are implied. This is expressed by an uncertainty about the slope and intercept b and a defining the line. A quantification can be found in the standard deviation of these parameters. Most computer programmes for regression will automatically produce figures for these. To illustrate the procedure, the example of the calibration graph in Section 6.4.4.1 is elaborated here.

A practical quantification of the uncertainty is obtained by calculating the standard deviation of the points on the line: the "residual standard deviation" or "standard error of the estimate", which we assumed to be constant (but which is only approximately so, see Fig. 6-4):

s_y = √( Σ(ŷᵢ − yᵢ)² / (n − 2) )    (6.23)

where
ŷᵢ = "fitted" y-value for each xᵢ (read from graph or calculated with Eq. 6.22); thus, ŷᵢ − yᵢ is the (vertical) deviation of the found y-values from the line
n = number of calibration points.

Note: Only the y-deviations of the points from the line are considered; it is assumed that deviations in the x-direction are negligible.
This is, of course, only the case if the standards are very accurately prepared.

Now the standard deviations for the intercept a and slope b can be calculated with:

s_a = s_y √( Σxᵢ² / (n Σ(xᵢ − x̄)²) )    (6.24)

and

s_b = s_y / √( Σ(xᵢ − x̄)² )    (6.25)

To make this procedure clear, the parameters involved are listed in Table 6-6. The uncertainty about the regression line is expressed by the confidence limits of a and b according to Eq. (6.9): a ± t·s_a and b ± t·s_b.

Table 6-6. Parameters for calculating errors due to calibration graph (use also figures of Table 6-5).

   xᵢ     yᵢ      ŷᵢ      ŷᵢ − yᵢ   (ŷᵢ − yᵢ)²
   0.0   0.05   0.037    −0.013     0.0002
   0.2   0.14   0.162     0.022     0.0005
   0.4   0.29   0.287    −0.003     0.0000
   0.6   0.43   0.413    −0.017     0.0003
   0.8   0.52   0.538     0.018     0.0003
   1.0   0.67   0.663    −0.007     0.0001
                              Σ = 0.00134

In the present example, using Eq. (6.23), we calculate:

s_y = √(0.00134 / 4) = 0.0183

and, using Eq. (6.24) and Table 6-5:

s_a = 0.0183 × √( 2.2 / (6 × 0.70) ) = 0.0132

and, using Eq. (6.25) and Table 6-5:

s_b = 0.0183 / √0.70 = 0.0219

The applicable t_tab is 2.78 (App. 1, two-sided, df = n − 2 = 4), hence, using Eq. (6.9):

a = 0.037 ± 2.78 × 0.0132 = 0.037 ± 0.037

and

b = 0.626 ± 2.78 × 0.0219 = 0.626 ± 0.061

Note that if s_a is large enough, a negative value for a is possible, i.e. a negative reading for the blank or zero-standard. (For a discussion about the error in x resulting from a reading in y, which is particularly relevant for reading a calibration graph, see Section 7.2.3.) The uncertainty about the line is somewhat decreased by using more calibration points (assuming s_y has not increased): one more point reduces t_tab from 2.78 to 2.57 (see Appendix 1).

Chapter 7
QUALITY OF ANALYTICAL PROCEDURES

7.1 Introduction

In this chapter the actual execution of the jobs for which the laboratory is intended is dealt with. The most important part of this work is of course the analytical procedures meticulously performed according to the corresponding SOPs. Relevant aspects include calibration, use of blanks, performance characteristics of the procedure, and reporting of results.
An aspect of utmost importance of quality management, the quality control by inspection of the results, is discussed separately in Chapter 8.

All activities associated with these aspects are aimed at one target: the production of reliable data with a minimum of errors. In addition, it must be ensured that reliable data are produced consistently. To achieve this, an appropriate programme of quality control (QC) must be implemented. Quality control is the term used to describe the practical steps undertaken to ensure that errors in the analytical data are of a magnitude appropriate for the use to which the data will be put. This implies that the errors (which are unavoidably made) have to be quantified to enable a decision whether they are of an acceptable magnitude, and that unacceptable errors are discovered so that corrective action can be taken. Clearly, quality control must detect both random and systematic errors. The procedures for QC primarily monitor the accuracy of the work by checking the bias of data with the help of (certified) reference samples and control samples, and the precision by means of replicate analyses of test samples as well as of reference and/or control samples.

7.2 Calibration graphs

7.2.1 Principle

Here, the construction and use of calibration graphs or curves in the daily practice of a laboratory will be discussed. Calibration of instruments (including adjustment) in the present context is also referred to as standardization. The confusion about these terms is mainly semantic, and the terms calibration curve and standard curve are generally used interchangeably (for definitions see also p. 49). The term "curve" implies that the line is not straight. However, the best (parts of) calibration lines are linear and, therefore, the general term "graph" is preferred.

For many measuring techniques calibration graphs have to be constructed.
The technique is simple and consists of plotting the instrument response against a series of samples with known concentrations of the analyte (standards). In practice, these standards are usually pure chemicals dispersed in a matrix corresponding with that of the test samples (the "unknowns"). By convention, the calibration graph is always plotted with the concentration of the standards on the x-axis and the reading of the instrument response on the y-axis. The unknowns are determined by interpolation, not by extrapolation, so that a suitable working range for the standards must be selected.

In addition, in the present discussion it is assumed that the working range is limited to the linear range of the calibration graphs, that the standard deviation does not change over the range (neither of which is always the case*), and that data are normally distributed. Non-linear graphs can sometimes be linearized in a simple way, e.g. by using a log scale (in potentiometry), but usually imply statistical problems (polynomial regression) for which the reader is referred to the relevant literature. It should be mentioned, however, that in modern instruments which make and use calibration graphs automatically these aspects sometimes go by unnoticed.

Some common practices to obtain calibration graphs are:

1. The standards are made in a solution with the same composition as the extractant used for the samples (with the same dilution factor) so that all measurements are done in the same matrix. This technique is often practised when analyzing many batches where the same standards are used for some time. In this way an incorrectly prepared extractant or matrix may be detected (in blank or control sample).

2. The standards are made in the blank extract. A disadvantage of this technique is that for each batch the standards have to be pipetted. Therefore, this type of calibration is sometimes favoured when only one or few batches are analyzed or when the extractant is unstable.
A seeming advantage is that the blank can be forced to zero. However, an incorrect extractant would then more easily go undetected. The disadvantage of pipetting does not apply in the case of automatic dispensing of reagents when equal volumes of different concentration are added (e.g. with flow-injection).

3. Less common, but useful in special cases, is the so-called standard additions technique. This can be practised when a matrix mismatch between samples and standards needs to be avoided: the standards are prepared from actual samples. The general procedure is to take a number of aliquots of sample or extract, add different quantities of the analyte to each aliquot (spiking), and dilute to the final volume. One aliquot is used without the addition of the analyte (blank). Thus, a standard series is obtained.

* The result is the so-called "unweighted" regression line. Because normally the standard deviation is not constant over the concentration range (it is usually least in the middle range), this difference in error should be taken into account. This would then yield a "weighted" regression line. The calculation of this is more complicated, and information about the standard deviation of the readings has to be obtained. The gain in precision is usually very limited, but sometimes the extra information about the error may be useful.

If calibration is involved in an analytical procedure, the SOP for this should include a description of the calibration sub-procedure and, if applicable, an optimization procedure (usually given in the instruction manual).

7.2.2 Construction and use

In several laboratories calibration graphs for some analyses are still adequately plotted manually, and the straight line (or sometimes a curved line) is drawn with a visual "best fit", e.g. for flame atomic emission spectrometry or colorimetry.
However, this practice is only legitimate when the random errors in the measurements of the standards are small; when the scattering is appreciable, the line-fitting becomes subjective and unreliable. Therefore, if a calibration graph is not made automatically by a microprocessor of the instrument, the following more objective and also quantitatively more informative procedure is generally favoured.

The proper way of constructing the graph is essentially the performance of a regression analysis, i.e. the statistical establishment of a linear relationship between concentration of the analyte and the instrument response, using at least six points. This regression analysis (of reading y on concentration x) yields a correlation coefficient r as a measure for the fit of the points to a straight line (by means of least squares).

Warning. Some instruments can be calibrated with only one or two standards. Linearity is then implied but may not necessarily be true. It is useful to check this with more standards.

Regression analysis was introduced in Section 6.4.4 and the construction of a calibration graph was given on p. 82 as an example. The same example is taken up here (and repeated in part) but focused somewhat more on the application. We saw that a linear calibration graph takes the general form:

y = bx + a    (6.18; 7.1)

where
a = intercept of the line with the y-axis
b = slope (tangent).

Ideally, the intercept a is zero: when the analyte is absent, no response of the instrument is to be expected. However, because of interactions, interferences, noise, contaminations and other sources of bias, this is seldom the case. Therefore, a can be considered as the signal of the blank of the standard series.

The slope b is a measure for the sensitivity of the procedure: the steeper the slope, the more sensitive the procedure, or: the stronger the instrument response on y to a concentration change on x (see also Section 7.5.3).
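Once a and b are known, a measured response is converted to a concentration by inverting the line equation: x = (y − a)/b. A minimal sketch, using the parameters of the calibration line of Eq. (6.22); the absorbance reading is hypothetical:

```python
# Convert an instrument reading y to a concentration x by inverting y = bx + a.
b, a = 0.626, 0.037          # calibration line of Eq. (6.22)

def concentration(y):
    """Interpolated concentration (mg/L) for a measured absorbance y."""
    return (y - a) / b

# Hypothetical sample reading within the working range (0-1.0 mg/L)
y_sample = 0.350
print(round(concentration(y_sample), 3))   # 0.5 (mg/L)
```

Readings outside the range of the standards would require extrapolation and should instead be diluted or re-measured against a wider standard series.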
The correlation coefficient r can be calculated by:

r = Σ(xᵢ − x̄)(yᵢ − ȳ) / √[Σ(xᵢ − x̄)² · Σ(yᵢ − ȳ)²]    (6.19; 7.2)

where:
xᵢ = concentrations of standards
x̄ = mean of concentrations of standards
yᵢ = instrument response to standards
ȳ = mean of instrument responses to standards

The line parameters b and a are calculated with the following equations:

b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²    (6.20; 7.3)

and

a = ȳ − b·x̄    (6.21; 7.4)

Example of calibration graph

As an example, we take the same calibration graph as discussed in Section 6.4.4.1 (Fig. 6-4, p. 82): a standard series of P (0-1.0 mg/L) for the spectrophotometric determination of phosphate in a Bray-I extract ("available P"), reading in absorbance units. The data and calculated terms needed to determine the parameters of the calibration graph were given in Table 6-5. The calculations can be done on a (programmed) calculator or, more conveniently, on a PC using a home-made program or an available regression program. The calculations yield the equation of the calibration line (plotted in Fig. 7-1):

y = 0.626x + 0.037    (6.22; 7.5)

with a correlation coefficient r = 0.997. As stated previously (6.4.3.1), such high values are common for calibration graphs. When the value is not close to 1 (say, below 0.98) this must be taken as a warning and it might then be advisable to repeat or review the procedure. Errors may have been made (e.g. in pipetting) or the used range of the graph may not be linear. Therefore, to make sure, the calibration graph should always be plotted, either on paper or on a computer monitor.

[Fig. 7-1. Calibration graph plotted from data of Table 6-5: absorbance (0 to 0.6) versus P concentration (0 to 1.0 mg/L).]

If linearity is in doubt the following test may be applied. Determine for two or three of the highest calibration points the relative deviation of the measured y-value from the calculated line:

deviation (%) = |yᵢ − (b·xᵢ + a)| / (b·xᵢ + a) × 100%    (7.6)

- If the deviations are < 5% the curve can be accepted as linear.
- If a deviation is > 5%, the range is decreased by dropping the highest concentration.
- Recalculate the calibration line by linear regression.
- Repeat this test procedure until the deviations are < 5%.

When, as an exercise, this test is applied to the calibration curve of Fig. 7-1 (data in Table 6-5), it appears that the deviations of the three highest points are < 5%, hence the line is sufficiently linear.

During calculation of the line, the maximum number of decimals is used; rounding off to the last significant figure is done at the end (see instructions for rounding off in Section 8.2).

Once the calibration graph is established, its use is simple: for each y value measured for a test sample (the "unknown") the corresponding concentration x can be determined either by reading from the graph or by calculation using Equation (7.1), or x is automatically produced by the instrument.

Chapter 7. Quality of Analytical Procedures

7.2.3 Error due to the regression line

Consideration of the error due to the regression line of the calibration graph is necessary because the actual response points yᵢ composing the line usually do not fall exactly on the line. Hence, random errors are implied. This is expressed by an uncertainty about the slope and intercept b and a defining the graph. A discussion of this uncertainty is given on p. 84. It was explained there that the error is expressed by s_y, the "standard error of the y-estimate" (see Eq. 6.23), a parameter automatically calculated by most regression computer programs.

This uncertainty about the ŷ values (the fitted y-values) is transferred to the corresponding concentrations of the unknowns on the x-axis by the calculation using Eq. (7.1) and can be expressed by the standard deviation of the obtained x-value. The exact calculation is rather complex but a workable approximation can be calculated with:

s_x ≈ s_y / b    (7.7)

Example

For each value of the standards xᵢ the corresponding ŷᵢ is calculated with Eq. (7.5):

ŷᵢ = 0.626xᵢ + 0.037    (7.8)

Then, s_y is calculated using Eq. (6.23) or by computer:

s_y = √(0.00134 / (6 − 2)) = 0.0183

Then, using Eq.
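The least-squares construction of Eqs. (7.2)-(7.4), the linearity check of Eq. (7.6) and the error approximation of Eq. (7.7) can be sketched in a few lines of Python. The absorbance readings below are illustrative only (Table 6-5 is not reproduced here), so the fitted parameters only approximate the y = 0.626x + 0.037 of the text.

```python
import math

def calibration_line(xs, ys):
    """Unweighted least-squares line y = b*x + a with correlation r
    and standard error of the y-estimate s_y (Eqs. 7.2-7.4, 6.23)."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    sxx = sum((x - xm) ** 2 for x in xs)
    syy = sum((y - ym) ** 2 for y in ys)
    b = sxy / sxx                   # slope (7.3)
    a = ym - b * xm                 # intercept (7.4)
    r = sxy / math.sqrt(sxx * syy)  # correlation coefficient (7.2)
    s_y = math.sqrt(sum((y - (b * x + a)) ** 2
                        for x, y in zip(xs, ys)) / (n - 2))
    return b, a, r, s_y

def linearity_deviations(xs, ys, b, a, top=3):
    """Relative deviation (%) of the highest calibration points (Eq. 7.6)."""
    pts = sorted(zip(xs, ys))[-top:]
    return [abs(y - (b * x + a)) / (b * x + a) * 100 for x, y in pts]

# Illustrative standard series: P 0-1.0 mg/L, absorbance readings (assumed)
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [0.05, 0.14, 0.29, 0.43, 0.52, 0.66]

b, a, r, s_y = calibration_line(xs, ys)
s_x = s_y / b                       # Eq. (7.7)
```

With these assumed readings the fit gives b of about 0.62, a of about 0.04 and r of about 0.998, and the deviations of the three highest points stay below the 5% limit, so the range would be accepted as linear.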
(7.7):

s_x ≈ 0.0183 / 0.626 = 0.029

Now, the confidence limits of the found results xᵢ can be calculated with Eq. (6.9):

xᵢ ± t · s_x    (7.9)

For a two-sided interval and 95% confidence: t = 2.78 (see Appendix 1, df = n − 2 = 4). Hence, all results in this example can be expressed as:

xᵢ ± t · s_x = xᵢ ± 0.08 mg/L

Thus, for instance, the result of a reading y = 0.22, using Eq. (7.5) to calculate xᵢ = 0.29, can be reported as 0.29 ± 0.08 mg/L. (See also Note 2 below.)

The used s_y value can only be approximate as it is taken constant here whereas in reality this is usually not the case. Yet, in practice, such an approximate estimation of the error may suffice. The general rule is that the measured signal is most precise (least standard deviation) near the centroid of the calibration graph (see Fig. 6-4).

The confidence limits can be narrowed by increasing the number of calibration points. Therefore, the reverse is also true: with fewer calibration points the confidence limits of the measurements become wider. Sometimes only two or three points are used. This then usually concerns the checking and restoring of previously established calibration graphs, including those in the microprocessor or computer of instruments. In such cases it is advisable to check the graph regularly with more standards. Make a record of this in the file or journal of the method.

Note 1. Where the determination of the analyte is part of a procedure with several steps, the error in precision due to this reading is added to the errors of the other steps and as such included in the total precision error of the whole procedure. The latter is the most useful practical estimate of confidence when reporting results. As discussed in Section 6.3.4 (p. 75), a convenient way to do this is by using Equations (6.8) or (6.9) with the mean and standard deviation obtained from several replicate determinations (n > 10) carried out on control samples, or, if available, taken from the control charts (see 8.3.2: Control Chart of the Mean).
Most generally, the 95% confidence for single values x of test samples is expressed by Equation (6.10):

x ± 2s    (6.10; 7.10)

where s is the standard deviation of the mentioned large number of replicate determinations.

Note 2. The confidence interval of ±0.08 mg/L in the present example is clearly not satisfactory and calls for inspection of the procedure. Particularly the blank seems to be (much) too high. This illustrates the usefulness of plotting the graph and calculating the parameters. Other traps to catch this error are the Control Chart of the Blank and, of course, the technician's experience.

7.2.4 Independent standards

It cannot be overemphasized that for QC a calibration should always include measurement of an independent standard or calibration verification standard at about the middle of the calibration range. If the result of this measurement deviates alarmingly from the correct or expected value (say > 5%), then inspection is indicated.

Such an independent standard can be obtained in several ways. Most usually it is prepared from pure chemicals by another person than the one who prepared the actual standards. Obviously, it should never be derived from the same stock or source as the actual standards. If necessary, a bottle from another laboratory could be borrowed.

In addition, when new standards are prepared, the remainder of the old ones always has to be measured as a mutual check (include this in the SOP for the preparation of standards!).

7.2.5 Measuring a batch

After calibration of the instrument for the analyte, a batch of test samples is measured. Ideally, the response of the instrument should not change during measurement (drift or shift). In practice this is usually the case for only a limited period of time or number of measurements, and regular recalibration is necessary.
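The 5% screening of an independent (calibration verification) standard described in Section 7.2.4 amounts to a one-line check. A minimal sketch, with the tolerance and measurement values purely illustrative:

```python
def verify_standard(measured, expected, tol_pct=5.0):
    """Check a calibration verification standard measured mid-range
    (Section 7.2.4): return (relative deviation %, acceptable?)."""
    dev = abs(measured - expected) / expected * 100
    return dev, dev <= tol_pct

# e.g. an independent 0.50 mg/L P standard read back as 0.52 mg/L
dev, ok = verify_standard(0.52, 0.50)    # 4% deviation: acceptable
dev2, ok2 = verify_standard(0.47, 0.50)  # 6% deviation: inspection indicated
```

If the check fails, the calibration (and the independent standard itself) should be inspected before any sample results are released.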
The frequency of recalibration during measurement varies widely, depending on technique, instrument, analyte, solvent, temperature and humidity. In general, emission and atomizing techniques (AAS, ICP) are more sensitive to drift (or even sudden shift: by clogging) than colorimetric techniques. Also, the techniques of recalibration and possible subsequent action vary widely. The following two types are commonly practised.

1. Step-wise correction or interval correction

After calibration, at fixed places or intervals (after every 10, 15, 20, or more, test samples) a standard is measured. For this, often a standard near the middle of the working range is used (continuing calibration standard). When the drift is within acceptable limits, the measurement is continued. If the drift is unacceptable, the instrument is recalibrated ("resloped") and the previous interval of samples remeasured before continuing with the next interval. The extent of the "acceptable" drift depends on the kind of analysis but in soil and plant analysis usually does not exceed 5%. This procedure is very suitable for manual operation of measurements. When automatic sample changers are used, various options for recalibration and repeating intervals or whole batches are possible.

2. Linear correction or correction by interpolation

Here, too, standards are measured at intervals, usually together with a blank ("drift and wash"), and possible changes are processed by the computer software, which converts the past readings of the batch to the original calibration. Only in case of serious mishap are batches or intervals repeated. A disadvantage of this procedure is that drift is taken to be linear whereas this may not be so. Autoanalyzers, ICP and AAS with automatic sample changers often employ variants of this type of procedure.

At present, the development of instrument software experiences a mushroom growth.
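The step-wise (interval) correction of type 1 can be sketched as a control loop. The `measure` and `recalibrate` callables stand for instrument-specific actions and are hypothetical; the interval size and the 5% drift limit follow the text.

```python
def measure_batch_stepwise(samples, measure, standard_value,
                           recalibrate, interval=10, max_drift_pct=5.0):
    """Step-wise correction (Section 7.2.5, type 1): after each interval
    of test samples, remeasure the continuing calibration standard; if
    the drift exceeds the limit, reslope and remeasure that interval.
    `measure` and `recalibrate` are instrument-specific callables
    (hypothetical placeholders here)."""
    results = []
    i = 0
    while i < len(samples):
        chunk = samples[i:i + interval]
        readings = [measure(s) for s in chunk]
        drift_pct = (abs(measure(standard_value) - standard_value)
                     / standard_value * 100)
        if drift_pct > max_drift_pct:
            recalibrate()                              # "resloping"
            readings = [measure(s) for s in chunk]     # remeasure interval
        results.extend(readings)
        i += interval
    return results
```

A toy instrument whose gain has drifted to 1.1 would fail the first drift check, be resloped, and have the first interval remeasured before the batch continues.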
Many new fancy features with respect to resloping, correction of carry-over, post-batch dilution and repeating are being introduced by manufacturers. Running ahead of this, many laboratories have developed their own interface software programs meeting their individual demands.

7.3 Blanks and Detection Limit

7.3.1 Blanks

A blank or blank determination is an analysis of a sample without the analyte or attribute, or an analysis without a sample, i.e. going through all steps of the procedure with the reagents only. The latter type is the most common as samples without the analyte or attribute are often not available or do not exist.

Another type of blank is the one used for calibration of instruments, as discussed in the previous sections. Thus, we may have two types of blank within one analytical method or system:
- a blank for the whole method or system, and
- a blank for analytical subprocedures (measurements) as part of the whole procedure or system.

For instance, in the cation exchange capacity (CEC) determination of soils with the percolation method, two method or system blanks are included in each batch: two percolation tubes with cotton wool or filter pulp and sand or celite, but without sample. For the determination of the index cation (NH4 by colorimetry or Na by flame emission spectroscopy), a blank is included in the determination of the calibration graph. If NH4 is determined by distillation and subsequent titration, a blank titration is carried out for correction of test sample readings.

The proper analysis of blanks is very important because:
1. In many analyses sample results are calculated by subtracting blank readings from sample readings.
2. Blank readings can be excellent monitors in quality control of reagents, analytical processes, and proficiency.
3. They can be used to estimate several types of method detection limits.

For blanks the same rule applies as for replicate analyses: the larger the number, the greater the confidence in the mean.
The widely accepted rule in routine analysis is that each batch should include at least two blanks. For special studies where individual results are critical, more blanks per batch may be required (up to eight).

For quality control, Control Charts are made of blank readings identically to those of control samples. The between-batch variability of the blank is expressed by the standard deviation calculated from the Control Chart of the Mean of Blanks; the precision can be estimated from the Control Chart of the Range of Duplicates of Blanks. The construction and use of control charts are discussed in detail in 8.3. One of the main control rules of the control charts, for instance, prescribes that a blank value beyond the mean blank value plus 3× the standard deviation of this mean (i.e. beyond the Action Limit) must be rejected and the batch be repeated, possibly with fresh reagents.

In many laboratories, no control charts are made for blanks. Sometimes, analysts argue that "there is never a problem with my blank, the reading is always close to zero". Admitted, some analyses are more prone to blank errors than others. This, however, is not a valid argument for not keeping control charts. They are made to monitor procedures and to alarm when these are out of control (shift) or tend to become out of control (drift). This can happen in any procedure in any laboratory at any time.

From the foregoing discussion it will be clear that signals of blank analyses generally are not zero. In fact, blanks may be found to be negative. This may point to an error in the procedure: e.g. for the zeroing of the instrument an incorrect or a contaminated solution was used, or the calibration graph was not linear. It may also be due to the matrix of the solution (e.g. extractant), and is then often unavoidable. For convenience, some analysts practice "forcing the blank to zero" by adjusting the instrument.
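The action-limit rule for blanks quoted above (reject a batch whose blank exceeds the mean blank plus 3× the standard deviation) can be sketched as follows; the historical readings are illustrative, not taken from the text.

```python
import statistics

def blank_action_limits(historical_blanks):
    """Lower/upper action limits (mean ± 3 s) derived from the
    Control Chart of the Mean of Blanks (see 8.3)."""
    m = statistics.mean(historical_blanks)
    s = statistics.stdev(historical_blanks)
    return m - 3 * s, m + 3 * s

def batch_blank_acceptable(blank_value, historical_blanks):
    """Main control rule: a blank beyond an action limit means the
    batch is rejected and repeated, possibly with fresh reagents."""
    lo, hi = blank_action_limits(historical_blanks)
    return lo <= blank_value <= hi

# Illustrative history of blank readings (absorbance units, assumed)
history = [0.020, 0.022, 0.018, 0.021, 0.019,
           0.023, 0.020, 0.017, 0.021, 0.019]
```

A new blank of 0.024 would fall inside the limits, whereas 0.030 would exceed the upper action limit and trigger rejection of the batch.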
Some instruments even invite or compel analysts to do so. This is equivalent to subtracting the blank value from the values of the standards before plotting the calibration graph. From the standpoint of Quality Control this practice must be discouraged. If zeroing of the instrument is necessary, the use of pure water for this is preferred. However, such general considerations may be overruled by specific instrument or method instructions. This is becoming more and more common practice with modern sophisticated hi-tech instruments. Whatever the case, a decision on how to deal with blanks must be made for each procedure and laid down in the SOP concerned.

7.3.2 Detection limit

In environmental analysis and in the analysis of trace elements there is a tendency to accurately measure low contents of analytes. Modern equipment offers excellent possibilities for this. For proper judgement (validation) and selection of a procedure or instrument it is important to have information about the lower limits at which analytes can be detected or determined with sufficient confidence. Several concepts and terms are used, e.g. detection limit, lower limit of detection (LLD), method detection limit (MDL).
The latter applies to a whole method or system, whereas the two former apply to measurements as part of a method.

Note. In analytical chemistry, "lower limit of detection" is often confused with "sensitivity" (see 7.5.3).

Although various definitions can be found, the most widely accepted definition of the detection limit seems to be: "the concentration of the analyte giving a signal equal to the blank plus 3× the standard deviation of the blank". Because in the calculation of analytical results the value of the blank is subtracted (or the blank is forced to zero), the detection limit can be written as:

LLD, MDL = 3 × s_bl    (7.11)

At this limit it is 93% certain that the signal is not due to the blank but that the method has detected the presence of the analyte (this does not mean that below this limit the analyte is absent!).

Obviously, although generally accepted, this is an arbitrary limit and in some cases the 7% uncertainty may be too high (for 5% uncertainty the LLD = 3.3 × s_bl). Moreover, the precision in that concentration range is often relatively low and the LLD must be regarded as a qualitative limit. For some purposes, therefore, a more elevated "limit of determination" or "limit of quantification" (LLQ) is defined as:

LLQ = 2 × LLD = 6 × s_bl    (7.12)

or sometimes as:

LLQ = 10 × s_bl    (7.13)

Thus, if one needs to know or report these limits of the analysis as quality characteristics, the mean of the blanks and the corresponding standard deviation s_bl must be determined (validation). The s_bl can be obtained by running a statistically sufficient number of blank determinations (usually a minimum of 10, and not excluding outliers). In fact, this is an assessment of the "noise" of a determination.

Note. Noise is defined as the "difference between the maximum and minimum values of the signal in the absence of the analyte, measured during two minutes" (or otherwise according to instrument instructions).
The noise of several instrumental measurements can be displayed by using a recorder (e.g. FES, AAS, ICP, IR, GC, HPLC, XRFS). Although this is not often used to actually determine the detection limit, it is used to determine the signal-to-noise ratio (a validation parameter not discussed here) and is particularly useful to monitor noise in case of trouble-shooting (e.g. suspected power fluctuations).

If the analysis concerns a one-batch exercise, 4 to 8 blanks are run in this batch. If it concerns an MDL as a validation characteristic of a test procedure used for multiple batches in the laboratory, such as a routine analysis, the blank data are collected from different batches, e.g. the means of duplicates from the control charts. For the determination of the LLD of measurements where a calibration graph is used, such replicate blank determinations are not necessary since the value of the blank as well as the standard deviation result directly from the regression analysis (see Section 7.2.3 and Example 2 below).

Examples

1. Determination of the Method Detection Limit (MDL) of a Kjeldahl-N determination in soils.

Table 7-1 gives the data obtained for the blanks (means of duplicates) in 15 successive batches of a micro-Kjeldahl N determination in soil samples. Reported are the millilitres 0.01 M HCl necessary to titrate the ammonia distillate and the conversion to results in mg N by:

mg N = reading × 0.01 × 14

Table 7-1. Blank data of 15 batches of a Kjeldahl-N determination in soils for the calculation of the Method Detection Limit.

ml HCl    mg N
0.115     0.0161
0.155     0.0217
0.110     0.0154
0.145     0.0203
0.090     0.0126
0.135     0.0189
0.115     0.0161
0.170     0.0238
0.135     0.0189
0.195     0.0273
0.155     0.0217
0.220     0.0308
0.135     0.0189
0.110     0.0154
0.145     0.0203

Mean blank: 0.0199 mg N
s_bl: 0.0048 mg N

MDL = 3 × s_bl = 0.014 mg N
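The MDL calculation of Example 1 can be reproduced directly from the mg N column of Table 7-1:

```python
import statistics

# Blank results (mg N), means of duplicates from 15 batches (Table 7-1)
blanks = [0.0161, 0.0217, 0.0154, 0.0203, 0.0126, 0.0189, 0.0161,
          0.0238, 0.0189, 0.0273, 0.0217, 0.0308, 0.0189, 0.0154,
          0.0203]

mean_blank = statistics.mean(blanks)   # ~0.0199 mg N
s_bl = statistics.stdev(blanks)        # ~0.0048 mg N
mdl = 3 * s_bl                         # Eq. (7.11): ~0.014 mg N
llq = 10 * s_bl                        # stricter Eq. (7.13): ~0.048 mg N
```

Note that `statistics.stdev` computes the sample standard deviation (divisor n − 1), which is the appropriate estimate here.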
