L4b Systematic Evaluation and Selection of Materials - Revised May 2022
2.
To identify the required performance (i.e. step 1), a general list of performance
requirements covering a wide range of possibilities is very useful. The following list is
recommended. Each of these performance requirements is represented by a set of materials'
properties and standards governing acceptable values.
2. Fire Safety. Includes resistance against the effects of fire such as flame propagation, burn
through, smoke, toxic gases, etc.
4. Durability. Includes ability to withstand wear; weather resistance such as to ozone and
UV; dimensional stability, etc.
Note: When using a sealant, will it stain adjacent surfaces? Will there be any chemical
interaction with other backup materials?
Note: For factory baked-on paint finishes, are there any satisfactory retouching materials
to cover scratches or other minor defects resulting from installation or use?
9. Economics. Includes installed costs, maintenance costs, budgetary limitations. May also
include results of cost/benefit analysis, NPV, or other economic evaluation indices.
10. Local Availability. Considers whether the material must be imported and if so, the lead
time required.
3.
The Evaluation Matrix (i.e. Step 6):
When evaluating a material or product for a specific application, the objective is to determine
the relevant capabilities in terms of the performance requirements. That is, to determine the
values of the relevant properties for the specific material or product. A value may or may not
be acceptable, but must be known. Hence test data obtained for the project must be available,
or the rating is “unknown”.
4.
Once a set of candidate materials or products has been evaluated, they can be compared to
select the most suitable one. In this case, the 6-column evaluation matrix is used and a Rating
Matrix is then developed to systematically arrive at the best choice (i.e. step 7).
5.
The final decision can take into account the views of all members of the project team if this is
desired. Recall that the Rating Matrix considers objective factors (test results, etc.) and
subjective factors: the judgement of the engineer, and possibly the project team members.
6.
For each optional material or product, points are awarded for each performance requirement
where these points are selected from a rating scale, say from 0 to 8 as shown below.
8 = Excellent satisfaction
7 = Good satisfaction
6 = Average satisfaction
5 = Moderate satisfaction
4 = Poor satisfaction
3 = No satisfaction
2 = Very unsatisfactory
1 = Extremely unsatisfactory
0 = Should not be considered
X = Unknown. A value of 1 can be used if all scores are required to be numerical, but it is
useful to know what percentage of the total ratings are Xs.
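As an illustration, the scale above can be captured in a small helper. This is a minimal sketch in Python; all names are illustrative, and the substitution of 1 for "X" follows the note above.

```python
# A minimal sketch of the 0-8 rating scale with "X" for unknown values.
# All names here are illustrative, not part of any standard.

RATING_SCALE = {
    8: "Excellent satisfaction",
    7: "Good satisfaction",
    6: "Average satisfaction",
    5: "Moderate satisfaction",
    4: "Poor satisfaction",
    3: "No satisfaction",
    2: "Very unsatisfactory",
    1: "Extremely unsatisfactory",
    0: "Should not be considered",
}

def percent_unknown(scores):
    """Percentage of ratings recorded as "X" (unknown)."""
    if not scores:
        return 0.0
    return 100.0 * sum(1 for s in scores if s == "X") / len(scores)

def to_numeric(scores, unknown_value=1):
    """Replace "X" with a numeric stand-in (the text suggests 1)."""
    return [unknown_value if s == "X" else s for s in scores]

ratings = [8, 7, "X", 5, 6, "X", 4, 8, 7, 6]
print(percent_unknown(ratings))  # 20.0
print(to_numeric(ratings))       # [8, 7, 1, 5, 6, 1, 4, 8, 7, 6]
```

Tracking the unknown percentage separately, rather than silently folding Xs into the numeric total, preserves the information the text says is useful.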
For each item under each performance requirement, and for the objective ratings, the engineer
compares the actual values measured for the candidate project materials with the required
values. Required values come from analysis; for example, the calculated embodied carbon for
the component, or the bending moment diagram for a beam. This comparison determines the
level of satisfaction, and hence the score under the “lab tested” or “field tested” column. If test
data is available not from the engineer’s own testing but from standards or codes, the resulting
score is placed under the “standard” or “code” column. For example, soil investigation is
quite expensive but standards or codes usually have information on the expected bearing
capacity values based on a visual description of the soil. Similarly, if a concrete batch is known
to be of a particular mix of cement:fines:coarse, and w/c, standards or codes usually provide
information on the expected 28-day compressive strength value.
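The comparison of a measured value against a required value can be sketched as below. The breakpoints mapping the margin of satisfaction onto the 0-8 scale are purely illustrative assumptions; any real mapping would come from the governing standard or the engineer's judgement.

```python
# Hypothetical sketch: convert a measured/required comparison into a 0-8 score.
# The breakpoints below are illustrative assumptions, not from the document.

def satisfaction_score(measured, required):
    """Score how well a measured value satisfies a required value.

    Assumes "larger is better" (e.g. strength versus applied stress).
    """
    if required <= 0:
        raise ValueError("required value must be positive")
    ratio = measured / required
    if ratio < 1.0:
        return 0  # fails the requirement outright
    # Full marks at a 50% margin over the requirement (assumed breakpoints).
    breakpoints = [(1.50, 8), (1.40, 7), (1.30, 6), (1.20, 5),
                   (1.10, 4), (1.05, 3), (1.02, 2), (1.00, 1)]
    for threshold, score in breakpoints:
        if ratio >= threshold:
            return score

print(satisfaction_score(45.0, 30.0))  # 8 (50% margin over the requirement)
print(satisfaction_score(25.0, 30.0))  # 0 (does not meet the requirement)
```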
The cost of engineering materials or systems testing is usually high. It is therefore very useful
to draw on information from very similar former projects regarding the known performance of
the material: both the demands placed on the material and how well the material handled
those demands. Ideally, the quantitative test data (e.g. strength) and analytical data (e.g.
applied stress) are readily available from the records of those former projects. In such a case,
the applicable columns are those under “historic findings”.
If the historic findings are from the engineer’s knowledge from former projects, the score is
placed under “past performance”. If from the end-user’s (or owner’s) knowledge from former
projects, the score is placed under “end-user”. The age of the historic data affects the reliability
of the data. This is accounted for under the “time-frame” column such that more recent data
will obtain a higher score.
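One possible way to let more recent historic data obtain a higher score, as the "time-frame" column requires, is an age-based discount. The ten-year half-life here is an assumption for illustration only, not a value from the document.

```python
# Illustrative sketch: discount a historic score by the age of the data so
# that more recent findings rate higher. The ten-year half-life is an
# assumption for illustration.

def age_adjusted_score(score, age_years, half_life_years=10.0):
    """Halve the weight of historic data every half_life_years."""
    weight = 0.5 ** (age_years / half_life_years)
    return score * weight

print(age_adjusted_score(8, 0))   # 8.0 (fresh data keeps the full score)
print(age_adjusted_score(8, 10))  # 4.0 (ten-year-old data at half weight)
```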
Lastly, if data is not available from any source, then subjective scoring can be used, based on
the opinions of persons associated with the project. The engineer’s score is placed under the
“consultant” column, and the scores from the other persons associated with the project are
placed as shown.
7.
The Rating Matrix is therefore a scorecard for an optional material or product. Weights can be
assigned to certain performance requirements (or team members' views) to reflect higher
priorities. The final score for the specific material or product is determined by dividing the total
points by the number of performance requirements. See the handout example. The material
or product with the highest score is the one that is most persuasive for acceptance.
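The final-score computation just described (total the weighted points, then divide by the number of performance requirements) can be sketched as follows; the option ratings and weights are illustrative.

```python
# Sketch of the final-score computation: total the (optionally weighted)
# points for each performance requirement, then divide by the number of
# requirements. The option ratings below are illustrative.

def final_score(ratings, weights=None):
    """ratings: one numeric score per performance requirement.
    weights: optional multiplier per requirement (defaults to 1)."""
    if weights is None:
        weights = [1.0] * len(ratings)
    total = sum(r * w for r, w in zip(ratings, weights))
    return total / len(ratings)

option_a = [8, 7, 6, 8, 5]  # e.g. fire safety, durability, economics, ...
option_b = [6, 8, 7, 7, 7]
print(final_score(option_a))  # 6.8
print(final_score(option_b))  # 7.0 -> option B is the more persuasive choice
```

With equal weights this reduces to the simple average of the scores; raising a weight above 1 lets a high-priority requirement pull the final score in its direction.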
In order to ensure that the evaluation is mostly objective, the engineer may set a condition that
the conclusions will only be accepted if at least a certain percentage of the scored items is
greater than a certain amount, and that the number of “unknown” scores is less than a certain
amount. For example, say 75 percent for the former, and 10 percent for the latter. The
engineer may also stipulate that certain data must come from testing.
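The acceptance condition can be sketched as a simple check. The 75 percent and 10 percent thresholds come from the example above; the score floor of 5 is an assumption for illustration.

```python
# Sketch of the objectivity check described in the text: accept the evaluation
# only if enough scored items clear a floor and few enough items are unknown.
# The 75% / 10% thresholds follow the text's example; the floor of 5 is assumed.

def evaluation_acceptable(scores, floor=5, min_pass_pct=75.0, max_unknown_pct=10.0):
    """scores: list of numeric ratings, with "X" marking unknown items."""
    n = len(scores)
    unknown_pct = 100.0 * sum(1 for s in scores if s == "X") / n
    pass_pct = 100.0 * sum(1 for s in scores if s != "X" and s > floor) / n
    return pass_pct >= min_pass_pct and unknown_pct <= max_unknown_pct

print(evaluation_acceptable([8, 7, 6, 8, 7, 6, 8, 6, 7, "X"]))  # True
```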