6 TESTING

INTRODUCTION
Materials form the sole subject matter of this book, and so it is not
surprising that testing comes into the picture so often. Problems have
frequently arisen in testing when the focus has been on the numbers or results
that a test produces, with little or no attention paid to such matters as:
• the relevance of the test to the performance required;
• the value of the test result as affected by the tester;
• misleading interpretation of test data based on selective sampling of
results;
• sanguine acceptance of complying results when there may be hazards
unaccounted for;
• omission of relevant test requirements.
The reader has probably experienced other problems, but the following
sections refer to my own ‘hands-on’ problem encounters. My hope is that
these discussions will encourage the construction team members to
question, discuss and offer suggestions before or at the tender stage. In
addition, rather than looking upon testing as a built-in overhead item, its
importance and relevance to performance in practice might be better served
by the incorporation of specific bill items.

6.1 LABCRETE OR REALCRETE


The term ‘labcrete’ is often used to define concrete or mortar that is made
and tested under strict laboratory conditions, whereas ‘realcrete’ applies
to concrete made on site or in the works and used on site. The grey area
here is that of concrete samples (such as cubes) made on site and then
tested in a laboratory. The problems described here, with both laboratory-

made and tested concrete and with site-produced samples, were in the
validity and applicability of the results obtained.
Consider, first, laboratory-made and tested concrete, taking admixtures
as an example. The standard for plasticising admixtures (BS 5075 Part 1:1982)
specifies inter alia that nominated concrete ingredients shall be used in a
certain way to produce test data that confirm or deny that the admixture
complies with the standard. The appendix of that standard (like those of the
other admixtures standards) warns the reader of the need for site trials to
assess the suitability of that admixture for the conditions on site or in the
works. However, site or works conditions will not emulate the BS
specification, and so compliance of any admixture with the standard does
not necessarily mean that it will produce the target performance in realcrete.
If samples are made on site and tested in the laboratory—a mixture of
labcrete and realcrete—the results obtained are likely to be more
meaningful than those from labcrete alone. However, the attention paid to
making such a small, simple specimen as a cube or prism, compared with
that paid to, say, a column, is likely to give rise to doubts.
Probably the best way to describe these two cases is to say that the
labcrete admixtures standards give assurance of classification, coupled
with potential for use, whereas the realcrete-labcrete hybrid indicates the
maximum potential compressive strength of the realcrete.
The specifier has a choice:
• Use data from labcrete or labcrete-realcrete (site-made cubes, for
example) as a be-all and end-all, with or without the application of
safety factors.
• Use realcrete data alone.
The second choice applies to a minority of concrete made: dimensionally
coordinated precast concrete products, where the product itself is tested. (I
prefer the phrase ‘dimensionally coordinated’ to ‘standardised’ because
there could be 100 diagrams in a standard deemed to comply, but possibly
only a few could be used with each other.) In-situ concrete and bespoke
precast units such as cladding and cast stone are generally assessed for
strength in a specification by a cube test. The latest standard for cast stone
(BS 1217:1997) accepts this, and has a division between type tests (labcrete-
realcrete) and proof tests (realcrete).
The main points to be addressed are the validity and applicability of
realcrete and labcrete information, and this has to include the common
realcrete-labcrete hybrid, generally known as a cube. To discuss the
materials science and technical requirements of this problem, the test needs
can usefully be listed under three headings:
• The test must be meaningful.
• The test must be accepted by all parties.

• The test data must be accepted as final, with minimum or no
interpretation.

6.1.1 MEANINGFUL
It would be logical to assume that the purpose of carrying out a test is to
produce data that relate either directly or, in a constant manner, indirectly
to a necessary or desirable performance characteristic. If strength
(compressive, tensile or shear) is in question, then it would be simple to
assume that a cube or prism result is sufficient. This is difficult to accept,
because a labcrete-realcrete test usually gives maximum potential strength
and little else. The use of the cube or prism density figures (specified to be
calculated and reported) generally gives misleading information. This is
because nominal cubes are tested, and these are not necessarily
geometrically true cubes: up to 1% deviations are permitted on all
dimensions. Thus nominal cubes, all from the same concrete and virtually
equally compacted, with a true density of 2350kg/m³, can have nominal
densities in the range 2280–2420kg/m³.
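As an illustration only, the following short Python sketch shows how the permitted 1% dimensional deviation produces that apparent spread; the 100mm cube size and the division of actual mass by nominal volume are my assumptions, not requirements of any standard.

    # Illustrative sketch: how a 1% deviation on every dimension of a nominal
    # 100mm cube shifts the density calculated against the nominal volume.
    TRUE_DENSITY = 2350.0    # kg/m^3, assumed true density of the concrete
    NOMINAL_SIDE = 0.100     # m, nominal cube dimension (assumed)

    def nominal_density(deviation):
        """Density reported when the real cube differs from nominal by the
        given fraction on every dimension but the nominal volume is used."""
        true_side = NOMINAL_SIDE * (1.0 + deviation)
        mass = TRUE_DENSITY * true_side ** 3     # actual mass of the cube
        return mass / NOMINAL_SIDE ** 3          # divided by the nominal volume

    for dev in (-0.01, 0.0, 0.01):
        print(f"{dev:+.0%} on all dimensions -> {nominal_density(dev):.0f} kg/m^3")
    # Prints approximately 2280, 2350 and 2421 kg/m^3, the spread quoted above.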
Therefore, apart from an indication of the maximum potential strength,
there is a risk of sacrificing the target of ‘meaningful’ on the altar of
traditionalism and the attractive cheapness of the cube test. The codes of
practice, such as BS 8110 Part 1:1985, generally apply safety factors to the
cube data to cater for structural design purposes. It could be argued, with
hindsight, that if the rebound hammer had been invented before the
crushing machine this problem would not exist.
This leads to the interim conclusion that, wherever possible, preference
should be given to realcrete testing, if there is any way in which it can be
shown to be of use.
If realcrete testing is the preference for producing meaningful data, then
the next question is: which of the durability hazards listed in section 4.2
are relevant to the concrete being tested? It follows that the parties
concerned with the test regime as well as the testers need to set up a matrix
of properties versus tests so that a sensible application of the available tests
to the concrete can be made.
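Such a properties-versus-tests matrix can be as simple as an agreed table. The following minimal Python sketch shows the idea; the property names, tests and pairings are invented for illustration and are not drawn from any standard or contract.

    # Sketch of a properties-versus-tests matrix. All entries are illustrative
    # assumptions; the real matrix would be agreed by the construction team.
    MATRIX = {
        "compressive strength":   ["cube test (labcrete-realcrete)", "core test (realcrete)"],
        "freeze-thaw resistance": ["air content of fresh concrete", "petrographic air-void count"],
        "chloride ingress":       ["mix design record (PFA/GGBS/MS)"],
        "surface absorption":     ["ISAT"],
    }

    def tests_for(properties):
        """Return the agreed tests for each of the listed properties."""
        return {p: MATRIX.get(p, []) for p in properties}

    for prop, tests in tests_for(["compressive strength", "chloride ingress"]).items():
        print(f"{prop}: {', '.join(tests) if tests else 'no test agreed'}")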

6.1.2 ACCEPTABILITY
Scientific and technical development of labcrete and realcrete in the
construction industry will proceed only when three factors are addressed:

(a) The test methods (included as costed bill items) are agreed in the
specification.
(b) Test limits or ranges are agreed. If interpretation is likely, this wording
should also be agreed at a preliminary stage.

(c) Preserving anonymity of the information source, data are fed back to
the BSI Committee Secretary so that the revision of standards may
make full use of the state of the art.
A current example of slow acceptance is the BS 1881/200
series (non-destructive testing), all of which are currently
‘recommendations’ and not mandatory.
An example of where (b) above has caused arguments is in the
interpretation of site-drilled, laboratory-tested cores. In an attempt to deal
with the interpretation of cores tested to BS 1881 Part 120, the Concrete
Society produced guidance in the form of a report (Concrete Society, 1976;
Addendum, 1987). The Society has accepted that this report (CSTR11)
needs updating, and is currently carrying out research with this aim in
mind.
CSTR11 suggests various interpretative approaches in trying, amongst
other exercises, to relate core strengths to the strength of cubes that would
have been made from that concrete. However, the suggested operating
factors are based on a small quantity of data, and should not be applied
dogmatically. The other factor relating to acceptability is that no
matter how relevant any test procedure is to the property in question, there
is the contractual matter of timing to consider. For example, if it takes six
months to produce data relating to resistance to chloride ingress, and the
track record shows that this resistance can be achieved by the use of PFA,
GGBS or MS additives, then it makes sense for a testing specification to
concede to a mix design specification.

6.1.3 FINALITY OF DATA


Arguments often arise over the finality of data; many of these are based on
lack of knowledge of the test criteria, including the status of the testing
facility. If the interpretation aspect discussed in section 6.1.2 is brought
into the picture, then it is possible that the wording was somewhat loose.
Therefore it is probably best to aim at a test regime that has a minimum of
or, preferably, no interpretative clauses.
This reflects the discussion in section 4.3, and implies that total quality
control at all stages—from specification to handover—would present the
fewest obstacles to agreement on the finality of the data. The finality would
naturally be based upon the three steps of the test regime’s being
meaningful, acceptable and final.

6.1.4 IDENTIFICATION
The problem reveals itself in the form of contract data invoking labcrete,
labcrete-realcrete and/or realcrete tests that have little or no relation to the

properties required and/or are capable of misinterpretation and/or do not
carry bill items to cater for testing.

6.1.5 REMEDIAL
No remedy appears to be possible; the situation would be a
contemporaneous one, and not one that occurs at the tender or pre-tender
stage. A contract review could perhaps be undertaken, in order to deal
with possible problems to come, but this is in the contractual field and
hence outside the remit of this book.

6.1.6 AVOIDANCE
The main thing to avoid is a contractual dispute over any of the items in
sections 6.1.1–6.1.3. One way to achieve this might be for the tendering
parties to adopt a more proactive role, coupled with strong liaison between
all members of the construction team. The setting up of a properties-versus-
tests matrix, mentioned earlier, could well have much to commend it.

6.2 DESIGN OR PERFORMANCE


I have commented in several sections on the testing specification being
design based or performance based. Where the problem in this subject has
reared its head is in a tendency to ignore the factors relating to this choice
and to concentrate—wrongly—on performance testing.
It is only partly logical to conclude that if concrete is required to perform
in a specific manner then a performance test should apply. This conclusion
ignores the many scientific, technical, architectural, engineering and
contractual requirements that also apply. Compounding all this is the
slackness that is sometimes met in the format of those parts of the
contractual documents relating to the materials: slackness in
• the words used;
• the intended meaning of those words;
• the interpretation of the words by the receiving party;
• whether or not the words addressed the property requirement.
It is highly unlikely that concrete would be needed in the construction with
only one property requirement. Thus each property needs to be discussed
in the light of the boundary conditions that pertain. There is growing
pressure from the European standards organisations to concentrate on
performance testing, with an apparent disregard of other considerations.
This could create future problems. This pressure should be resisted;
performance-based specifications should be supported only when they
have minimum interference with buildability, and they relate to sensible

property targets. This may mean that a specification has to have a mixture
of design-based and performance-based clauses, but if the five
requirements listed below are considered, this will form the basis for a
logical materials approach:
(a) funder—price, speed and financial return;
(b) specifier—unambiguous, relevant and sensible clauses;
(c) specialists—appreciation of services involved;
(d) contractor and subcontractor—buildability;
(e) tester—timing of data returns and meaningful tests.
Other factors may also be relevant, but it is these five that, singly or in
combination, have led to problems and discussions. None of these
requirements is an independent variable; altering one of them will almost
certainly affect one or more of the others. Chloride diffusion control by
design rather than as a performance-based specification is an example:
input at (b) involves (d) and (e).
Another example commonly met in troubleshooting is the use of air
entrainment to produce frost-resistant concrete (see also section 1.4). A
typical specification for a concrete with 20mm maximum size aggregate
would be 3.5–7.5% total air in the fresh concrete (BS 1881 Part 106:1983).
This specification is design based, but has the aura of a performance test. It
possibly comes into the category of the next section. Consider how this
specification relates to (a)–(e) on the reasonable assumption that the
construction team members wish to have a frost-resistant concrete (putting
aside other property targets such as strength, flatness and appearance):
(a) The funder is unlikely to be affected by the price or speed of putting
the admixture into the concrete. Where the funder may be concerned
is with costs arising after handover or completion due to (b)-oriented
problems.
(b) The specifier will not know whether compliance with the air content
requirement means that the air bubbles are present in the optimum
sizes and geometrical distribution.
(c) Data from the specialist would probably emanate from the admixture
manufacturer and are likely to be misapplied because we are dealing
with labcrete, not realcrete or a hybrid.
(d) The contractor’s buildability is unlikely to be affected unless (b) applies,
in which case remedial or replacement work might be required.
(e) On the basis of (b) and (c) the testing is not likely to be meaningful, but
a delayed timing of data—to wait for petrographic results—will
probably need to be considered if pre-works data have not been
obtained.
There are dangers of ‘tunnel vision’, in concentrating on design at the
expense of performance testing (or vice versa), as well as in considering

only one of the five requirements listed and discussed above and over-
looking the others. The choice of design testing, performance testing, or
both, depends upon what the concrete has to achieve in cost-effective
performance terms.

6.2.1 IDENTIFICATION
Look for any documentation in which design testing should have been
specified instead of performance testing, or vice versa, as well as a lack of
consideration of any one or more of (a)–(e) listed above.

6.2.2 REMEDIAL
The only possible remedy for a current situation is to try and obtain a
contractual variation or instruction to cater for the obstructing matters.

6.2.3 AVOIDANCE
Pre-contract discussions or comments at tender stage seem to be the way
to address specific cases. In general, the properties-versus-tests matrix
proposed in section 6.1 could be used and qualified by method statements
and test data limits. The benefits of having standard contract clauses
addressing each of the construction targets could also be discussed.

6.3 CAMOUFLAGE TESTING


This was one of the types of testing listed in Levitt (1985), which dealt with
the philosophy of testing. Camouflage testing may be defined as any test
requirements or procedures that are completely irrelevant to reasonable
and sensible materials property targets. The problem with camouflage
testing is that it is largely irrelevant, misleading, dishonest, and defies logic.
There are a number of bases for camouflage:
(a) trying to impress others by having a test clause;
(b) copying something that has been done before without checking its
relevance;
(c) catering for a problem by invoking a test that has little or no relevance
to that problem;
(d) promotion of a test facility;
(e) promotion of a proprietary product.
An example of (a) was an instance where one of the construction parties
aimed to set out before another member of the building team a
considerable amount of test data, in order to impress by the sheer amount
of paperwork.

Probably (b) is the most insidious form of camouflage testing, because it
reflects strongly on the traditionalism that pervades the construction
industry. The reader may be able to pick out examples in the previous text,
but the author has had to be careful not to mention specifics.
In (c) the case was where a client experiencing a problem with the
concrete was ‘satisfied’ with additional but irrelevant testing. Persons
becoming unwillingly involved in such a situation should record and
document their views to the relevant party.
Both (d) and (e) need no qualification, and examples can be found in the
earlier text.

6.3.1 IDENTIFICATION
The problem reveals itself in the inclusion of a test requirement (method
and/or limits) that is completely irrelevant to a property that should be
under consideration.

6.3.2 REMEDIAL
Unless the requirement is deleted or altered, no remedy is possible.

6.3.3 AVOIDANCE
As with so many of the other problems described earlier, a sensible
discussion between the construction team members at pre-tender or tender
stage is suggested.

6.4 REPEATABILITY AND REPRODUCIBILITY


There is a growing trend to include data on these two properties in both
British and American standards. Briefly, the meanings of these two words
are as follows:
Repeatability refers to the production of data by a specific centre, either
by the repetition of testing on the same sample (non-destructive tests,
for example) or by replicate tests on subsamples from the one master
sample. These data can be produced by more than one operative
working in that centre. Repeatability is commonly described in
statistical terms such as variance, standard deviation or range.
Reproducibility refers to the production of data on nominally
identical subsamples or samples tested at more than one centre, and
compares the results within the group of centres. Again, as with
repeatability, a statistical method is generally used to compare
numbers.

The term often used in standards and other documents to describe
repeatable and reproducible data is ‘precision data’.
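As a purely illustrative sketch, with invented cube-strength figures and a deliberately cruder treatment than the formal precision methods of the standards, repeatability and reproducibility might be summarised as follows.

    # Simplified sketch: repeatability (spread within one centre) versus
    # reproducibility (spread between centres). The figures are invented and
    # the statistics are cruder than the formal precision methods in standards.
    from statistics import mean, stdev

    replicate_results = {                    # MPa, subsamples of one master sample
        "centre A": [42.0, 43.5, 41.5, 42.5],
        "centre B": [44.0, 45.0, 43.5, 44.5],
        "centre C": [41.0, 42.0, 40.5, 41.5],
    }

    for centre, data in replicate_results.items():
        print(f"{centre}: mean {mean(data):.1f} MPa, within-centre s = {stdev(data):.2f} MPa")

    centre_means = [mean(d) for d in replicate_results.values()]
    print(f"between-centre spread of means: s = {stdev(centre_means):.2f} MPa")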
Concrete is a multi-component (sand, coarse aggregate, water, cement,
admixtures, additives), multi-variable (mixing, compaction, curing)
material. The problems that this causes are twofold. First, whatever test is
being considered there will be variations, and the demand for stringency
in repeatability limits has to be realistic. Second, although statisticians
prefer relatively large numbers of centres to be involved for reproducibility
studies, there have been instances when inferences or ‘conclusions’ have
been drawn from as few as six cooperating laboratories. In my view, the
number should be at least 12.
Therefore, as far as repeatability is concerned, it is possible for a single
centre to produce enough data for a statistical analysis to be meaningful
(assuming that the test being examined is one for which that centre can
produce the required quantity of data without undue interference with its
other commitments). However, the test has to be of a common genre and
common to a large number of centres.
So, for concrete testing laboratories, it follows that a study of
repeatability and reproducibility would be feasible for data such as cube
strengths, and aggregate specific gravities, but restrictions could well be
encountered for petrographic tests, oxygen diffusion tests and the like.
Caution is necessary when using statistics, because it is an applied and
not a pure form of mathematics. Because assumptions are made in the
mathematical treatment of data, any results produced are not definitive;
statistics does not ‘prove’ or ‘show’ anything. The results can only indicate
likelihood, comparison, relationship or trend. Section 4.4 discussed the
inadequacy of the normal or Gaussian distribution in catering for cube or
cylinder strength when the target strength is close to the aggregate crushing
strength. For UK aggregates, ultimate strengths in the range 60–100MPa
could be assumed as typical, and so C50 and higher specifications for
concrete strength might well require a different approach for both
specifying and drawing inferences. This application of data would apply
to repeatability tests inter alia.

Example 1
This example concerned the use of the Brinell hardness pistol to assess the
strength of prestressed concrete units in a factory. The pistol used to be in
common use as a hand-held test tool for hardness testing of metals and
alloys. Its principle was to impact a hardened steel ball against the surface
under test; the hardness of the metal was assessed by the diameter of the
spherical impression. (The same principle is now used for metal testing,
but a diamond with strict geometry to its facets is used. All modern test
equipment is in the form of a composite machine.)

In the precast concrete works, the manufacturer’s target strength was
45MPa at 28 days. Strict total quality control was exercised, and the cube
strengths obtained lay in the range 40–50MPa. Each time the units were
tested with the pistol an average impression diameter of 3mm was
recorded, with a range from about 2.7 to 3.3mm.
As these data referred to a specific strength-repetitive concrete, a range
of concrete cubes were made in another laboratory with cube strength
targets ranging from 15 to 60MPa at 28 days old. Just before crushing, each
cube was tested with 10 pistol impressions, and the average of these was
compared with each cube result. It was found that, irrespective of the
strength, the impression diameter was always about 3mm.
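A sketch of that comparison, with figures invented to mimic the behaviour described rather than the original data, makes the point that consistent readings need not be informative readings.

    # Sketch with invented data of the kind described above: the mean impression
    # diameter scarcely moves while the cube strength varies fourfold.
    from statistics import mean

    pistol_trials = [   # (cube strength in MPa, ten impression diameters in mm)
        (15.0, [3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 3.1, 2.9, 3.0, 3.0]),
        (30.0, [3.0, 3.1, 2.9, 3.0, 3.0, 3.2, 2.8, 3.0, 3.1, 2.9]),
        (60.0, [2.9, 3.0, 3.0, 3.1, 2.9, 3.0, 3.0, 3.1, 2.9, 3.0]),
    ]

    for strength, diameters in pistol_trials:
        print(f"cube {strength:4.0f} MPa -> mean impression {mean(diameters):.2f} mm")
    # Every mean sits near 3mm: the data are consistent, but carry no usable
    # relationship between impression diameter and strength.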
Two points arise out of this. First, consistency of data can be misleading.
Second, as discussed in section 5.6, the weaker concrete could have been
predicted to have improved resistance to impact, because its energy
absorption characteristic would have been better than that of the stronger concrete.
As an aside, this leads to an apparent anomaly, in that the rebound
hammer generally gives a positive relationship between rebound and
strength; the rebound numbers increase with increasing strength. The
reason for this may be the difference between the relatively large area of
impact of the hammer and the 6mm diameter steel ball in the old Brinell
hardness pistol.

Example 2
The problem relates to the recently issued recommendation for non-
destructive testing of concrete using initial surface absorption (BS 1881 Part
208:1986). The standard refers to the omission of precision data, as there
was not enough information to hand when the standard was prepared.
The ISAT, by its nature, generally measures only the surface voidage
property, and at a relatively short interval from the start of the test.
Observation of a typical concrete surface drying out after rain would
reveal a patchy appearance over distances as small as a few millimetres,
caused by variations in the absorption properties. The sensitivity of the
ISAT would be expected to reflect this variation, and experience has shown
this to be so.
ISAT results are specified to be recorded in units of mL/(m²·s), and the
apparatus has minimum and maximum range limits of 0.01 and 3.0 of these
units respectively. At the lower end, the result can be read to an accuracy of
0.01, and at the higher end to 0.2.
In practice it has been found that, taking readings at 10 minutes as
examples, concrete averaging 0.01 will vary from zero to 0.03. The more
permeable example would vary from 2.6 to ‘too fast to measure’. This, in
my opinion, indicates that precision data will be difficult to obtain for the
ISAT, and that it is unrealistic to expect ‘ideal’ repeatability and

reproducibility. An additional problem found on site with the low (say
0.01) results is that if the reading is taken as the sun starts to shine on the
equipment, a liquid expansion occurs in the cap and a ‘negative’ absorption
can be recorded.
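To indicate the scale of the difficulty, the following rough sketch expresses those spreads as coefficients of variation; treating each quoted range loosely as a ±2 standard deviation band, and taking the ‘too fast to measure’ upper end as 3.0, are my assumptions for illustration only.

    # Rough sketch only: 10-minute ISAT spreads quoted above expressed as a
    # coefficient of variation, treating each range loosely as a +/-2s band.
    cases = {
        # name: (mean, low, high) in mL/(m^2.s)
        "dense concrete":     (0.01, 0.00, 0.03),
        "permeable concrete": (2.80, 2.60, 3.00),   # 'too fast to measure' taken as 3.0
    }

    for name, (avg, low, high) in cases.items():
        approx_s = (high - low) / 4.0            # half-range treated as ~2s
        cov = 100.0 * approx_s / avg
        print(f"{name}: mean {avg}, approximate s {approx_s:.3f}, CoV about {cov:.0f}%")
    # A CoV of roughly 75% at the low end of the range shows why 'ideal'
    # precision data are unrealistic for the ISAT.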

6.4.1 IDENTIFICATION
There may be pressure to ask for precision data when they are either not
justified or irrelevant, or data may be used or tabulated on the basis of a
small number of results.

6.4.2 REMEDIAL
In a current situation there would appear to be no remedy apart, possibly,
from a review of or amendment to the conditions of application.

6.4.3 AVOIDANCE
Recipients of precision documentation in the preparation of standards,
specifications and regulations should take a proactive role. A defensive,
reactive response to the receipt of such data is not constructive.

6.5 CHANGES IN TESTING


The problem is, quite simply, tradition. This takes the general form of strong
resistance to anything new. I neither condone nor condemn this attitude,
but I cannot agree with a generalisation either way. If a test has been
established for a long time this neither means that it is the right test (see
6.1–6.4) nor that there is necessarily a better test that could take its place.
The problem is probably exacerbated by the lack of use of the currently
available mechanisms to correct it. It is logical for members of
the construction team to accept that testing needs to have a nominated
position in the control of material properties. If testing is a weak link in the
chain joining performance to materials, design and workmanship then
science and technology will have little or nothing to contribute to
construction. The best way to tackle this problem is to study each test
requirement on the basis of the matrix suggested earlier in this chapter,
and then do one of the following:
(a) Confirm and/or reinforce that test.
(b) Replace it with a different test.
(c) Run a new test alongside the existing test: that is, (a)+(b).
(d) Remove the test requirement completely.
(e) Introduce a test where there was no test before.

It is in respect of (c) that the future appears to be the most attractive. Both
British and European standards have a tendency for a specific test to be the
‘reference’, with other tests being subsidiary. For several British Standards,
where an alternative test to the reference test is included, users are
requested to submit data (but obviously not contract details) to the BSI.
This is a good idea, but lacks the power to change things. It might be better
to make both the reference and the alternative tests mandatory, with all
results to be sent to the BSI. (The BSI would be the secretariat for national
as well as European and international standards).
The complete removal of a test, as listed under (d), can form a large
discussion platform. Over the years many revised standards have omitted
earlier test specifications. Reasons for the omission of a test would be given
in the revised standard. There is no reason to conclude that this process is
complete; there are still some tests that have no reason for their presence
other than tradition.
By the same token, under (e), there is no reason to conclude that every
test necessary to define a property or performance need is present in every
Standard. If the matrix approach suggested in sections 6.1–6.3 is acceptable,
a method of dealing with the problem and its spin-offs could result.

6.5.1 IDENTIFICATION
The problem reveals itself as a reliance on inappropriate or misplaced tests,
often coupled with a resistance to consider or accept anything new or
different.

6.5.2 REMEDIAL
Apart from discussing the possibility of variations to the test requirements
there seems to be little that can be done to remedy a current problem.

6.5.3 AVOIDANCE
Refer to section 6.4.3 for a nominally identical approach. BSI publications
such as BSI News provide a monthly update on the progress of British,
European and international standards. In addition, any person can
purchase a draft at the public comment stage and submit their opinion to
the relevant secretariat.

6.6 TESTING FIXATION


Although this title implies that the problem is a mixture of the earlier
discussion in sections 6.3 and 6.5, there is in fact a completely different

facet that warrants exposure. The problem encountered was a dogmatic
insistence that wherever or whenever a property or requirement was under
consideration there had to be a test accompanying that part of the
specification. This insistence was often found to generate a spin-off
problem in the form of a reversal, which commonly manifested itself as an
insistence on some form of property requirement so that a test could be
proposed to accompany it.
An example of test insistence that in my opinion was (and still is) largely
unjustified was described in reference to aluminous cement in section 1.7.
The test was differential thermal analysis on drilled powder samples taken
from precast pretensioned concrete beams, made of high alumina cement
(as it was then called and is still known), in order to ascertain the degree of
conversion. Virtually every construction examined in my experience
showed 70–90% conversion, with stable performance of the precast units
and the construction.
An example of testing that was not really sensible was described in
section 3.1 in reference to chloride diffusion, where track records have
shown that good performance has been achieved by the mix design route.
Testing would not have furnished data of significance for about 6 months,
and such a potential contract delay to await test results would have been
unacceptable to most parties in the construction team.
It is debatable whether either the alkali-silica reaction (section 1.5) or
delayed ettringite formation (section 1.12) comes into the spin-off category
referred to above. In my experience damage has been almost certainly due
to ASR on only three constructions. As far as DEF is concerned, apart from
the possibility of its having been the cause of the splitting observed in
experimental kerbs described in section 1.12, no case on site has been
experienced.

6.6.1 IDENTIFICATION
Someone will insist on the presence of a test and/or call up a property,
whether relevant or not, so as to have a test to address that property.

6.6.2 REMEDIAL
If discussion is possible, and logic can be applied, a change in the wording
to the testing or property documentation should be attempted.

6.6.3 AVOIDANCE
The most fruitful approach would seem to be full discussion at committee,
institution or authority levels before requirements are put into formal
documents.

6.7 TESTING ACCURACY
The problem refers to the data produced from a test: it is created at the
specifying stage by the demand for an impossible or unreasonable accuracy,
and at the reporting stage by a form of impressionism. Examples are given
below.

6.7.1 PROBLEMS AT SPECIFYING


A common example of this family of problems is in typical specification
wording such as ‘The concrete cube strength shall be 30MPa at 28 days
old’. There are two virtual impossibilities here. First, even under the strictest
form of production, it is impossible to get each cube to reach the exact
strength of 30MPa at that age. Second, cube strengths are specified to be
reported to the nearest 0.5MPa, so the 30MPa (if it had been possible to
achieve) should be 30.0MPa. The omission of that 30MPa being specified
to be a minimum, maximum or average could also be called into question.
In other instances, dimensions have been specified to an accuracy of
1mm, and tolerances have been completely omitted from the drawings. In
the former case, the contractor or producer was being asked to work to the
unachievable; in the latter case, no tolerance would seem to be permissible
in the work.

6.7.2 PROBLEMS AT REPORTING


An example of this is with a typical cube-testing machine that ‘locks in’ the
failing load reading to the nearest 1kN. Thus it would appear that a 100mm
(nominal) cube failing at 424kN load could be reported to have had a
42.4MPa strength. Putting aside the specification requirement of reporting
to a 0.5MPa accuracy, this report ignores the machine accuracy. At best
this would be no better than 1% under a Class 1 machine certification. It
also ignores the cube’s being only nominal in size, with a 1% allowance on
all dimensions. Therefore the 42.4 could be anywhere between 42.0 and
42.8 on the machine accuracy. The crushing area of the cube could be up to
2% larger or smaller than the specified nominal size calculation. Taking
the largest negative and positive cube area sizes on the final load range,
the actual cube strength (ignoring other testing variables) could lie
anywhere in the range 41.2–43.6MPa. So although the specified reporting
accuracy for this cube gives a strength of 42.5MPa, it is still subject to an
error of about 1MPa.
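The arithmetic behind that bracket can be set out explicitly. The following sketch simply combines the ±1% load tolerance with the ±1% dimensional allowance, ignoring the other testing variables as the text does, and reproduces a range close to the figures quoted.

    # Sketch of the error band discussed above: 424kN failing load on a nominal
    # 100mm cube, Class 1 machine (+/-1% on load), 1% allowance on each dimension.
    NOMINAL_LOAD_N = 424_000.0
    NOMINAL_AREA_MM2 = 100.0 * 100.0

    load_low, load_high = NOMINAL_LOAD_N * 0.99, NOMINAL_LOAD_N * 1.01
    area_low, area_high = (100.0 * 0.99) ** 2, (100.0 * 1.01) ** 2

    reported = NOMINAL_LOAD_N / NOMINAL_AREA_MM2          # 42.4 MPa
    lowest = load_low / area_high                          # least load over largest area
    highest = load_high / area_low                         # greatest load over smallest area

    print(f"reported strength: {reported:.1f} MPa")
    print(f"possible range:    {lowest:.1f} to {highest:.1f} MPa")
    # Prints roughly 41.1 to 43.7 MPa, close to the 41.2-43.6MPa bracket above.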
Another example of a reporting problem is with a water absorption test
on, say, an approximately ‘cubic’ sample of 100mm ‘side’ cut from concrete.
Its weight, and the weight changes from oven drying to wetting, can usually
be measured to an accuracy of 1g. For a sample weight of about 2kg, this

represents an accuracy of 0.05%. If an operative carried out 30 minute
absorption tests on three subsamples and obtained readings of 3.50%, 3.50%
and 3.55%, an average of 3.52% could be reported. The lesson here is that
reporting accuracy should not be based upon unnecessary mathematics,
which can give answers indicating a form of superiority.
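A short sketch using the figures above illustrates the point; rounding the mean to one decimal place is offered as the more sensible report, which is my suggestion rather than a requirement of any standard.

    # Sketch of the reporting-precision point above: a 1g balance resolution on a
    # ~2kg sample limits each absorption figure to about 0.05%, so reporting the
    # mean of three readings to 0.01% implies precision that is not really there.
    SAMPLE_MASS_G = 2000.0
    BALANCE_RESOLUTION_G = 1.0

    resolution_pct = 100.0 * BALANCE_RESOLUTION_G / SAMPLE_MASS_G   # ~0.05%

    readings = [3.50, 3.50, 3.55]                 # 30-minute absorptions, %
    raw_mean = sum(readings) / len(readings)      # 3.516...%

    print(f"measurement resolution: about {resolution_pct:.2f}% absorption")
    print(f"raw mean {raw_mean:.2f}% -> more honestly reported as {round(raw_mean, 1)}%")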

6.7.3 IDENTIFICATION
Look for an unreasonable or impossible accuracy specified or an unjustified
accuracy in the data reported.

6.7.4 REMEDIAL
In the first example, the specifier should be advised of the impossibility or
inapplicability of the requirement; in the second example, the report should
be returned to the issuing activity. The corrected replacement report should
have the same report reference number as the superseded one but be
marked ‘Rev’ or ‘Superseding Report No......’ or similar, and the superseded
report should be marked as such.

6.7.5 AVOIDANCE
Both specifiers and testers should be aware of the problems that can be
generated, and should take appropriate steps to avoid them. Other
members of the construction team should also draw the attention of the
specifier or the testing authority to any cases that come into their remits.
