SYSTEMATIC EVALUATION AND SELECTION OF MATERIALS


1.
General Procedure:-
1. Identify performance required.
2. Establish evaluation criteria.
3. Acquire test results if not available.
4. Acquire historical information.
5. Select a range of optional materials or products.
6. Develop an evaluation matrix for each material or product.
7. Develop a rating matrix for each material or product.

2.
For identifying the required performance (i.e. step 1), a general list of performance
requirements that caters for a wide range of possibilities is very useful. The following list is
recommended. Each of these performance requirements is represented by a set of materials'
properties and standards governing acceptable values.

1. Structural Serviceability. Includes strength; stiffness; resistance to indentation, etc.

2. Fire Safety. Includes resistance against the effects of fire such as flame propagation, burn
through, smoke, toxic gases, etc.

3. Habitability. Includes liveability relative to thermal comfort, acoustic properties, water permeability, optical properties, hygiene, general comfort, light and ventilation, etc.

4. Durability. Includes ability to withstand wear; weather resistance such as to ozone and
UV; dimensional stability, etc.

5. Practicability. Ability to surmount field conditions such as transportation, storage, handling, tolerances, connections, site hazards, etc.

Note: Transportation of huge prefabricated elements will require investigation with respect to roads, bridges, and tunnels to assure passage. Tolerances must also be investigated where dissimilar elements meet, such as a concrete frame or a structural steel frame that is to receive precast concrete or metal and glass curtain walls.

6. Compatibility. Ability to withstand reaction with adjacent materials in terms of chemical interaction, galvanic action, ability to be coated, etc.

Note: When using a sealant, will it stain adjacent surfaces? Will there be any chemical interaction with backup materials?

7. Maintainability. Ease of cleaning; repairability of punctures, gouges, and tears; recoating, etc.

Note: For factory baked-on paint finishes, are there any satisfactory retouching materials
to cover scratches or other minor defects resulting from installation or use?

8. Code Acceptability. Includes review of code compliance and of the manufacturer's claims as to compliance. This includes embodied carbon per environmental sustainability codes such as LEED (U.S.-based Leadership in Energy and Environmental Design), BREEAM (U.K.-based Building Research Establishment Environmental Assessment Method), etc. It also includes engineering codes for earthquakes, hurricanes, and floods (e.g. ASCE 7 and ASCE 24).

9. Economics. Includes installed costs, maintenance costs, and budgetary limitations. May also include the results of cost/benefit analysis, NPV, or other economic evaluation indices.

10. Local Availability. Considers whether the material must be imported and if so, the lead
time required.

11. Functionality. Considers its ease of use or user-friendliness, especially in terms of manufacturing or fabrication.

3.
The Evaluation Matrix (i.e. Step 6):-
When evaluating a material or product for a specific application, the objective is to determine
the relevant capabilities in terms of the performance requirements. That is, to determine the
values of the relevant properties for the specific material or product. A value may or may not
be acceptable, but must be known. Hence test data obtained for the project must be available,
or the rating is “unknown”.

The evaluation matrix comprises 4 essential columns: performance requirements, properties, tests, and results. If the intention is to use the evaluation as part of an overall selection process, then 2 additional columns are required: desired value, and pass/fail. The desired value is obtained from the analysis of the component under the particular situation, for example, the bending moment under the applied load. The desired value can also be a particular stipulation by the design engineer, such as the grade of steel to be used. If the value obtained by testing meets or exceeds the desired value, then this is considered a “pass”. See the attached handout showing an example evaluation.
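
The pass/fail logic of the 6-column matrix can be illustrated with a short Python sketch. The field names, property values, and test reference below are hypothetical and purely for illustration; they are not taken from the handout example.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EvaluationRow:
    # One row of the 6-column evaluation matrix (hypothetical field names).
    performance_requirement: str   # e.g. "Structural Serviceability"
    property_name: str             # e.g. "Yield strength (MPa)"
    test: str                      # governing test or standard
    result: Optional[float]        # value obtained by testing; None = unknown
    desired_value: float           # value required by analysis or stipulation

    def pass_fail(self) -> str:
        # Pass only if the tested result meets or exceeds the desired value.
        if self.result is None:
            return "unknown"
        return "pass" if self.result >= self.desired_value else "fail"

# Usage: a single hypothetical row for a steel member strength check.
row = EvaluationRow("Structural Serviceability", "Yield strength (MPa)",
                    "tensile test per the governing standard", 355.0, 345.0)
print(row.pass_fail())   # -> "pass"

Note that for properties where a lower value is better (e.g. embodied carbon), the comparison in pass_fail would be reversed.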

4.
Once a set of optional materials or products has been evaluated, they can be compared in order to select the most suitable one. In this case, the 6-column evaluation matrix is used, and a Rating Matrix is then developed to systematically arrive at the best choice (i.e. step 7).

5.
The final decision can take into account the views of all members of the project team if this is desired. Recall that the Rating Matrix considers objective factors (test results, etc.) and subjective factors: the judgement of the engineer and possibly that of the project team members.

6.
For each optional material or product, points are awarded for each performance requirement
where these points are selected from a rating scale, say from 0 to 8 as shown below.

8 = Excellent satisfaction
7 = Good satisfaction
6 = Average satisfaction
5 = Moderate satisfaction
4 = Poor satisfaction
3 = No satisfaction
2 = Very unsatisfactory
1 = Extremely unsatisfactory
0 = Should not be considered
X = Unknown. A value of 1 can be used if all scores are required to be numerical, but it is useful to know what percentage of the total ratings are Xs (see the short sketch below).
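
As a small illustrative sketch in Python (the scores below are hypothetical), an X can be scored as 1 where a numerical value is required while the percentage of Xs is still reported:

from typing import List, Union

Rating = Union[int, str]   # an integer 0-8, or "X" for unknown

def summarize(ratings: List[Rating]) -> dict:
    # Score each X as 1 so the total stays numerical, but track the share of Xs.
    numeric = [1 if r == "X" else int(r) for r in ratings]
    percent_unknown = ratings.count("X") / len(ratings) * 100
    return {"total": sum(numeric), "percent_unknown": percent_unknown}

# Hypothetical ratings for one candidate material across 11 requirements.
scores: List[Rating] = [8, 7, 6, "X", 5, 7, 8, 6, "X", 7, 5]
print(summarize(scores))   # {'total': 61, 'percent_unknown': 18.18...}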

For each item under each performance requirement, and for the objective ratings, the engineer compares the actual values measured for the candidate project materials with the required values. Required values come from analysis, for example, the calculated embodied carbon for the component, or the bending moment diagram for a beam. This comparison determines the level of satisfaction, and hence the score under the “lab tested” or “field tested” column. If test data is available not from the engineer’s own testing but from standards or codes, the comparison, and hence the score, is placed under the “standard” or “code” column. For example, soil investigation is quite expensive, but standards or codes usually have information on the expected bearing capacity values based on a visual description of the soil. Similarly, if a concrete batch is known to be of a particular mix of cement:fines:coarse and water/cement ratio, standards or codes usually provide information on the expected 28-day compressive strength value.

The cost of testing engineering materials or systems is usually high. It is therefore very useful to utilize information from former projects that are very similar, in terms of the known performance of the material: both the demands placed on the material and how well the material handled those demands. Ideally, the quantitative test data (e.g. strength) and analytical data (e.g. applied stress) are readily available from the records of those former projects. In such a case, the applicable columns are those under “historic findings”. If the historic findings come from the engineer’s knowledge of former projects, the score is placed under “past performance”. If they come from the end-user’s (or owner’s) knowledge of former projects, the score is placed under “end-user”. The age of the historic data affects its reliability. This is accounted for under the “time-frame” column, such that more recent data will obtain a higher score.

Lastly, if data is not available from any source, then subjective scoring can be used, based on the opinions of persons associated with the project. The engineer’s score is placed under the “consultant” column, and the scores from the other persons associated with the project are placed as shown.

7.
The Rating Matrix is therefore a scorecard for an optional material or product. Weights can be assigned to certain performance requirements (or team members' views) to reflect higher priorities. The final score for the specific material or product is determined by dividing the total points by the number of performance requirements. See the handout example. The material or product with the highest score is the one that is most persuasive for acceptance.

In order to ensure that the evaluation is mostly objective, the engineer may set a condition that the conclusions will only be accepted if at least a certain percentage of the scored items exceeds a certain level, and the number of “unknown” scores is below a certain limit, for example, 75 percent for the former and 10 percent for the latter. The engineer may also stipulate that certain data must come from testing. A brief sketch of this scoring and acceptance check is given below.
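
The following Python sketch is a minimal interpretation of the weighted final score and of these acceptance conditions; the weights, scores, thresholds, and function names are assumptions for illustration, with the 75 percent and 10 percent figures above used as defaults.

def final_score(scores, weights=None):
    # Weighted average of requirement scores; with unit weights this equals
    # the total points divided by the number of performance requirements.
    weights = weights or [1.0] * len(scores)
    numeric = [1 if s == "X" else s for s in scores]   # X scored as 1
    return sum(w * s for w, s in zip(weights, numeric)) / sum(weights)

def acceptable(scores, min_level=5, min_fraction=0.75, max_unknown=0.10):
    # Accept only if enough items score at or above min_level and the share
    # of unknown (X) ratings stays within the allowed limit.
    known = [s for s in scores if s != "X"]
    enough_good = sum(s >= min_level for s in known) >= min_fraction * len(scores)
    few_unknown = scores.count("X") <= max_unknown * len(scores)
    return enough_good and few_unknown

candidate = [8, 7, 6, 5, 7, 8, 6, "X", 7, 5, 6]   # 11 performance requirements
print(round(final_score(candidate), 2))           # 6.0 with default unit weights
print(acceptable(candidate))                      # True under the example thresholds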
