Active Problem-based Learning on Nano-amended Cement Composites for Nuclear Waste Storage for Civil and Environmental Engineering Undergraduate Students

Abstract
For two consecutive years (fall 2014 and 2015), part of a civil engineering materials laboratory
course at the University of South Carolina, Columbia, was dedicated to creating in-class
environments for student-centered, problem-based learning. The goal was to elicit critical
thinking in the complex and increasingly important area of nanomaterials in civil and
environmental engineering. To this end, each class of over 30 third- and fourth-year
undergraduate students participated in two 1.5-hour class periods.

In the first class period, the students: (a) watched three short videos illustrating the relevance of
and basic concepts related to nanomaterials; and (b) were asked to estimate a solution to a
driving question set in the context of nano-amended cement composites for below-ground
nuclear waste storage at a nearby site, where leaching is of concern. The driving question was
“What is the amount, using wt% as units (i.e., percent by weight of cement), of multiwalled
carbon nanotubes needed to attain at least a 20% increase in compressive strength in Type I
ordinary Portland cement mortar?” This question was presented in a decision worksheet that
included supplemental questions that were designed to lead the students to progressively refine
their solution after strengthening their understanding of fundamental concepts (e.g., particle size
and proportions within the mortar microstructure, nanomaterial processing for physical and
chemical compatibility with hydrated cement, and salient properties affecting strength and
radionuclide retention, such as porosity). The students worked on the decision worksheet
individually, and then in teams as a “think-pair-share” exercise.

In the second class period, the instructors presented slides on recent results of research sponsored
by the US Department of Energy. The presentation was made interactive by introducing active-
learning exercises before presenting salient results. These exercises were designed to refine the
answer to the driving question by engaging the students in critical thought through reasoning,
decision making, and problem solving.

This paper reports on the salient results of this two-year experience. In particular, we discuss
and demonstrate how the decision worksheets and written evidence from active-learning
exercises were used to extract information that helps understand how students: (a) learn about
and apply new, career-relevant information; and (b) influence each other’s learning processes
when confronted with a difficult yet relevant question and given the opportunity to engage in
problem-based learning.

Objectives
This project aims to assess methods of evaluating students’ critical thinking from written
responses in an engineering-focused class environment by: (a) examining how students absorb,
process, and apply new information through multiple iterations of similar active learning
exercises, with new information presented between iterations; (b) examining how group
dynamics influence students’ responses and thinking processes as responses evolve from
individual to team and back to individual work across these iterations; and (c) identifying
factors that may influence students’ thinking processes during these exercises and developing
suitable performance measures.
Problem-based and active learning in the classroom are increasingly important in developing the
necessary critical thinking skills of an engineer [1,2]. Decision worksheets help students
progressively learn complex material (e.g., nanotechnology) through questions that are designed
to progressively refine the answer to the driving question, after strengthening the understanding
of fundamental concepts [3]. Here, the active learning exercises are based on a problem-based
learning framework called “Environments For Fostering Effective Critical Thinking
(EFFECTs)”, which was developed at the University of South Carolina, Columbia (USC)
through NSF funding [4-6]. The findings will inform instructors on the effectiveness of active
learning exercises in the classroom.
These activities provided an opportunity to investigate students’ preconceived notions of
nanomaterials and how certain teaching methods and learning activities affect how students learn
about nanomaterials. This was achieved by considering two objectives: how students’ responses
changed between individual and team settings, and how students’ responses evolved through
various individual and team activities as they were presented with new information. Focusing
first on one repeated question that required students to answer with a sketch, responses were
tracked from the first activity to the third to determine how students’ perceptions of
nanomaterials evolved.
This paper uses the information from these activities to explore how students learn about
nanomaterials. Responses to other questions, which were repeated only once during the
activities, were also analyzed to compare the individual and team activities and investigate how
group dynamics came into play. These questions were concerned with identifying and
rationalizing the information needed to answer the driving question, providing possible insight
into how confident students were in their answers and how effective their rationales were in
convincing their peers of the validity of their responses.
Background
During the Fall 2014 and Fall 2015 semesters, students in ECIV 303L (Civil Engineering
Materials Laboratory) at USC were introduced to nanomaterials in civil and environmental
engineering via short informational videos, decision worksheets, and a lecture presentation on
ongoing research on nuclear waste storage. The module consisted of two 1.5-hour sessions.
Session I consisted of: three Nanotechnology Undergraduate Education videos (watched at home
and in class for about 20 minutes); decision worksheet done individually (40 minutes); and
decision worksheet done in teams of four to five students (30 minutes). The decision worksheet,
entitled “Portland Cement Mortar for Nuclear Waste Storage” [7], asked students to work through a series of
questions designed to lead students to answer a difficult driving question (“What is the amount in
percent by weight of cement of multiwalled carbon nanotubes needed to attain at least a 20%
increase in compressive strength in Type I ordinary Portland cement mortar?”), requiring
students to make educated assumptions along the way as to the parameters, values, and rationales
needed to answer the driving question.
In Session II, the instructor presented the results of ongoing research on nuclear waste storage in
South Carolina, with intermediate breaks for individual active learning exercises. The outcome
consisted of written answers on supporting questions similar to those of the decision worksheet
of Session I [2]. These questions were designed to create a problem-based learning environment
that would elicit critical thinking [7] by understanding and visualizing fundamental concepts
(e.g., particle size and proportions within the mortar microstructure, nanomaterial processing for
physical and chemical compatibility with hydrated cement, and salient properties affecting
strength and radionuclide retention, such as porosity).
Methodology
Research conducted thus far has encompassed two phases of a larger project design, namely:
Phase 1) Evaluating visuals from decision worksheets from Session I and active learning
exercise responses from Session II, by: (a) developing a rubric to evaluate visuals developed to
understand nano-scales and proportions within the mortar microstructure; (b) evaluating
drawings from both Session I and Session II against the rubric; and (c) recording the results. The
visuals were in response to the question “In the space below, draw a magnified view of the nano-
reinforced mortar at an appropriate nanoscale. List the dimensional scale and label for all of the
parts in your sketch.”
Phase 2) Compiling and comparing data from the remaining written responses by: (a) collecting
data from all responses, including quantitative data from written responses and qualitative data
from all responses using established rubrics; and (b) categorizing and arranging data, and
comparing by category (i.e., individual responses, team response, by question, by session, etc.).
Sketches from Phase 1 were first evaluated using the rubric provided in Table 1, resulting in a
broad overview of the impact of different factors throughout the activities. Schematics from the
first activity (individual, first worksheet) and the final activity (individual, second worksheet,
after team activity and introduction of new material on nanomaterials) were scored according to
this rubric to determine overall net improvement. Scores of Poor, Good, and Excellent were
assigned using a photo of the actual structure of nanotube-reinforced cement mortar as the
standard.
In response to these results, the initial rubric (Table 1) was refined in order to track more specific
factors believed to indicate evidence of the learning process during the activities. Phase 1
sketches were then evaluated based upon two rubrics, one focusing on students’ ability to
identify salient elements in the schematic of Portland cement mortar amended with carbon
nanotubes (Table 2), and one focusing on students’ understanding of the nanoscale (Table 3).
Representative examples of low-scoring and high-scoring schematics are shown in Figure 1(a)
and Figure 1(b), respectively.
Criterion 1: Does the sketch contain and label all reasonable and necessary elements?
- 1 (Poor): Sketch contains one or two relevant elements, which may or may not be labeled.
- 2 (Good): Sketch contains nanotubes and a scale, along with one or two other relevant elements; most items have labels.
- 3 (Excellent): Sketch contains nanotubes, all correct elements of mortar, and an appropriate scale; all elements are correctly labeled.

Criterion 2: Does the sketch represent an appropriate and reasonable configuration of elements?
- 1 (Poor): Sketch shows elements out of proportion, unrealistic configurations, etc.
- 2 (Good): Most elements of the sketch are proportionate, with reasonable configurations.
- 3 (Excellent): Sketch offers a realistic and accurate representation that is similar to actual observations.

Criterion 3: Does the sketch demonstrate understanding of the subject matter?
- 1 (Poor): Sketch shows little understanding; may be chaotic in nature or show inappropriate labeling or element shapes.
- 2 (Good): Sketch shows some understanding; may have one or two errors, but clearly shows thought and planning.
- 3 (Excellent): Sketch demonstrates understanding of the subject matter through clear, knowledgeable, and correct drawings.
Table 1. Preliminary Rubric for Evaluation of Schematics.

Criterion 1: Does the sketch contain and label all reasonable and necessary elements?
- 1 (Poor): Sketch contains one or no necessary elements; no elements are labeled.
- 2 (Fair): Sketch contains at least two necessary elements and may contain one or more unnecessary elements; at least one or two elements are correctly labeled.
- 3 (Good): Sketch contains at least three necessary elements and no unnecessary elements; at least three elements are correctly labeled.
- 4 (Excellent): Sketch contains all necessary elements and no unnecessary elements; all elements are correctly labeled; sketch may offer more information through captions.

Criterion 2: Are representations of elements correct and realistic depictions?
- 1 (Poor): Sketch shows little understanding of appropriate element shapes; may be chaotic in nature.
- 2 (Fair): Sketch shows general understanding of appropriate element shapes.
- 3 (Good): Element shapes are mostly reasonable and realistic, with one or two errors.
- 4 (Excellent): Depictions of all elements are reasonable and similar to actual photographic observations.
Table 2. Second Rubric for Evaluation of Schematics – Elements.

Criterion 1: Does the sketch contain an appropriate scale?
- 1 (Poor): The sketch does not contain a scale.
- 2 (Fair): The sketch contains a scale, but it may be inappropriate.
- 3 (Good): The sketch contains a reasonably appropriate scale; may attempt to label elements with reasonable sizes.
- 4 (Excellent): The sketch contains an appropriate scale; may label elements with correct sizes.

Criterion 2: Are element proportions correctly represented?
- 1 (Poor): Element proportions are inappropriate; elements may all be roughly the same size.
- 2 (Fair): Elements are different sizes, but still in inappropriate proportions.
- 3 (Good): Element sizes are approaching correct proportions; may have one or two errors.
- 4 (Excellent): Element proportions are appropriate, realistic, and similar to actual photographic observations.
Table 3. Second Rubric for Evaluation of Schematics – Scale.
Figure 1(a). Example of Low-Scoring Schematic.
Figure 1(b). Example of High-Scoring Schematic.
During Phase 2, responses from the first activity (individual) and second activity (team) were
transcribed into digital records. Responses were then analyzed to determine if and how various
group dynamics had affected students’ understanding of the material. Methods utilized included
identifying common words and concepts used, as well as the frequency with which these words
occurred, recording frequency of various response formats, and then comparing the individual
and team responses to identify any patterns in the evolution of responses.
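As an illustration of this kind of frequency analysis, the following is a minimal Python sketch, assuming the transcribed responses are available as plain-text strings; the response lists and the stop-word set shown here are hypothetical placeholders rather than the actual data or tooling used in the study.

```python
from collections import Counter
import re

# Common English words to exclude from the counts (illustrative subset only).
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "for"}

def word_frequencies(responses):
    """Count word occurrences across a list of transcribed free-text responses."""
    counts = Counter()
    for text in responses:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

# Hypothetical transcriptions of individual and team responses to Question 2.
individual_responses = ["nanotube size, water to cement ratio, strength of nanotubes"]
team_responses = ["water to cement ratio, nanotube dispersion, compressive strength"]

individual_counts = word_frequencies(individual_responses)
team_counts = word_frequencies(team_responses)

# Most common words in each setting, and words that dropped out in the team responses.
print(individual_counts.most_common(10))
print(team_counts.most_common(10))
print(sorted(set(individual_counts) - set(team_counts)))
```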
Phase 2 also involved identifying and tracking misconceptions evident in the responses about
nanomaterials and the nanoscale. After collecting this quantitative data from the responses,
responses from Question 2 of the first and second activities (“What information do you think you
need to know to answer the driving question? List all of the factors and/or variables that are
needed and identify which ones are the most important. For each factor/variable listed, offer a
rationale for why it is needed (for example, through what mechanism/s does it affect strength
enhancement?)”) were qualitatively evaluated using the rubric presented in Table 4. This rubric
was designed to gain an overview of the general state of students’ understanding of
nanomaterials and the nanoscale. Scores of Poor, Good, and Excellent were assigned using a
“correct” response as the “Excellent” standard and the question prompt as a guide.

Criterion 1: Does the response offer a rationale?
- 1 (Poor): Offers no rationale.
- 2 (Good): Offers limited or shallow rationale.
- 3 (Excellent): Offers in-depth rationale.

Criterion 2: Does the response provide sufficient information/fully answer the question?
- 1 (Poor): Offers no or few parameters/questions; does not consider the driving question.
- 2 (Good): Offers some parameters; may pose questions about behaviors; may offer irrelevant information.
- 3 (Excellent): Offers relevant information/parameters that are sufficient to answer the driving question.

Criterion 3: Does the response attempt to connect parameters to demonstrate how they affect answers to the driving question?
- 1 (Poor): Does not attempt to connect parameters.
- 2 (Good): Makes some attempt to connect parameters; may make irrelevant connections.
- 3 (Excellent): Connects parameters and offers an in-depth demonstration of how these affect each other and the driving question.
Table 4. Rubric for Evaluation of Written Responses to Question 2.

Results and Discussion
In Phase 1, preliminary scores ranged from 3 to 8 on a scale of 0 to 9 (missing schematics were
disregarded). The scores from the first activity were subtracted from the scores of the third and
final activity to give an “overall improvement” score for each student. Table 5 summarizes these
overall improvement scores, which range from −2 to +4, for the class.
While 46.9% of students failed to improve their scores, the overall positive improvement score
for the class suggests some evidence of learning processes occurring during the activities. It is
important to note that an average improvement of 0.94 points, on a scale of 0 to 9, translates to a
rough improvement of just over an entire letter grade on a traditional university grading scale (an
improvement of 10.4 points on a 100-point scale).
Phase 1, preliminary rubric (Table 1)
Sum of all improvement scores: 30.0
Average: 0.94
Standard deviation: 1.64
Percentage of negative improvement scores: 15.6%
Percentage of neutral improvement scores: 31.3%
Table 5. Results of Preliminary Schematic Evaluation.
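The following is a minimal sketch of how the statistics in Table 5 could be reproduced from per-student rubric scores; the score lists below are hypothetical placeholders, not the actual class data.

```python
import statistics

# Hypothetical per-student rubric totals on the 0-9 scale (Table 1 rubric):
# one entry per student for the first activity and for the final activity.
first_activity = [5, 6, 4, 7, 5, 6, 3, 8]
final_activity = [6, 7, 5, 7, 6, 8, 4, 8]

# Overall improvement score per student: final score minus first score.
improvement = [final - first for first, final in zip(first_activity, final_activity)]

total = sum(improvement)
average = statistics.mean(improvement)
std_dev = statistics.stdev(improvement)
pct_negative = 100 * sum(1 for d in improvement if d < 0) / len(improvement)
pct_neutral = 100 * sum(1 for d in improvement if d == 0) / len(improvement)

# Equivalent average improvement on a 100-point scale (0-9 rubric scale);
# e.g., 0.94 points corresponds to about 10.4 points out of 100.
average_on_100 = average / 9 * 100

print(total, average, std_dev, pct_negative, pct_neutral, average_on_100)
```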

After the initial rubric (Table 1) was redesigned to evaluate elements and scale separately, using
one rubric for elements (Table 2) and one for scale (Table 3), the improvement scores for these
individual aspects decreased but remained positive overall, as highlighted in Table 6. Also, fewer
students (37.6%) showed neutral or negative improvement, a decrease of 9.3 percentage points.
Students showed more improvement in identifying and labeling elements than in developing a
sense of the nanoscale, with overall improvements of 9.8 and 6.2 points, respectively, on a
100-point scale.

Phase 1, second rubrics (Table 2 and Table 3)
                                            Elements Rubric (Table 2)   Scale Rubric (Table 3)   Average of both
Sum of all improvement scores               28.0                        18.0                     23.0
Average                                     0.88                        0.56                     0.72
Standard deviation                          1.08                        1.54                     1.31
Percentage of negative improvement scores   12.5%                       18.8%                    15.7%
Percentage of neutral improvement scores    21.9%                       21.9%                    21.9%
Table 6. Results of Second Schematic Evaluations.

In Phase 2, the first question posed to students was the overall driving question for the entire
activity, “What is the amount, using wt% as units (i.e. percent by weight of cement), of
multiwalled carbon nanotubes (MWCNTs) needed to attain at least a 20% increase in
compressive strength in Type I cement mortar?” Values given ranged from 0.0001 to 11 wt% for
individual responses, and from 0.05 to 2.54 wt% for team responses. Since the range between the
highest and lowest values decreased by 77% from the individual responses to the team responses,
it appears that the extreme outlying responses were discredited by group discussion. None of the
values were unreasonably large, suggesting that students already had a conception of nanotubes
as significantly smaller and stronger elements compared to the others in the cement mortar. As
students transitioned to the team exercise, half (4) of the 8 groups used, for the team worksheet,
an answer previously given by one of their group members. Other groups reached a consensus
among themselves: two groups calculated a new value during the team exercise, two chose a
value that a majority of team members had given, and one settled on a group consensus guess.
With only two groups providing calculations to support their answer, it may be assumed that the
majority (6) of the groups chose an answer, already suggested by a group member, that seemed
“good enough” or was convincingly defended by its original author.
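The reported 77% reduction in range can be checked directly from the extreme values quoted above, for example:

```python
# Extreme answers to the driving question (wt% of cement), as quoted above.
individual_min, individual_max = 0.0001, 11.0
team_min, team_max = 0.05, 2.54

individual_range = individual_max - individual_min   # about 11.0 wt%
team_range = team_max - team_min                      # 2.49 wt%

# Percent decrease in range from individual to team responses (about 77%).
range_decrease = 100 * (individual_range - team_range) / individual_range
print(f"{range_decrease:.0f}% decrease in range")
```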
The remaining three questions on the worksheet for the first and second activities were designed
to encourage critical thinking about the considerations required to answer the driving question,
and ultimately lead students to an educated estimate. Question 2, posed after students were asked
to provide a sketch in Question 1, asked “What information do you think you need to know to
answer the driving question? List all of the factors and/or variables that are needed and identify
which ones are the most important. For each factor/variable listed, offer a rationale for why it is
needed (for example, through which mechanism/s does it affect strength enhancement?)”. The
first aspect of the responses considered was the format in which students gave their responses. As
shown in Figure 2, the majority of responses include both bulleted lists of parameters and
questions or comments about the properties, behaviors, and interactions of certain parameters.

Figure 2. Percentages of Various Response Formats, Question 2.

Bulleted lists were the most common response format. Only 7% of total responses used only
questions about properties, etc. Exceptions include a response that was phrased in two
paragraphs, and a response containing a bulleted list consisting of categories of properties. This
information confirms that students approached the activity with technically minded thinking, as
expected of engineering students, but it also demonstrates somewhat deeper thinking about how
these parameters are affected by processes and interactions. Students were also specifically asked
in the question to provide a rationale for each piece of information given and to rank the items in
order of importance. Only 35% of individual responses and 38% of team responses were
numbered or ranked according to importance. More surprising was that only 37% of team
responses offered an explanation for their parameters, compared to 94% of individual responses.
These data suggest that students may have used the explanations given on the individual
worksheets to support their answers during group discussion, but did not consider it important,
or did not remember, to reiterate those rationales on the team worksheet once the group was in
agreement.
The rubric presented in Table 4 was used to qualitatively evaluate the responses to Question 2.
Criteria were laid out to rank responses as Poor, Good, or Excellent in three categories. The
average of these three scores became the overall score for each response. Percentages of the
occurrence of each rank for the responses are presented in Figure 3. Quality responses were
considered to be those that listed relevant parameters, offered a reasonable rationale for said
parameters, and made an attempt to think more critically about the influence of each parameter.
The quality of the responses decreased by 13% between individual and team responses, partly
due to the lack of rationale given in team responses. As mentioned above, rationales may not
have been at the forefront of students’ minds when completing the team worksheet, as they had
already used their rationales to justify their individual answers.
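The following is a minimal sketch of how the three criterion scores from the Table 4 rubric could be averaged into an overall rank and tallied into the percentages plotted in Figure 3; the scores and the binning of averaged values into Poor/Good/Excellent are assumptions for illustration, since the paper does not specify the exact mapping.

```python
from statistics import mean

# Hypothetical criterion scores (1 = Poor, 2 = Good, 3 = Excellent) for each response:
# (rationale, sufficiency of information, connection of parameters), per the Table 4 rubric.
responses_scores = [(3, 3, 2), (2, 2, 2), (1, 2, 1), (3, 3, 3)]

def overall_rank(criterion_scores):
    """Average the three criterion scores and map the average to a rank label.
    The bin edges below are an assumed mapping; the paper does not specify one."""
    avg = mean(criterion_scores)
    if avg < 1.5:
        return "Poor"
    if avg < 2.5:
        return "Good"
    return "Excellent"

ranks = [overall_rank(scores) for scores in responses_scores]
shares = {label: 100 * ranks.count(label) / len(ranks)
          for label in ("Poor", "Good", "Excellent")}
print(shares)  # percentages of each rank, analogous to Figure 3
```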

Figure 3. Results of Response Evaluations, Question 2 (quality of individual and team responses, ranked Poor, Good, or Excellent).

Even though this question required only a handful of specific parameters to be considered to give
a reasonable answer, a large number of unique responses was given, and much of the information
offered was irrelevant. It is suggested that students, not feeling confident in their answers or
lacking much of a response, listed irrelevant information (including factors concerning nuclear
waste, climate and natural disasters, and material costs) in order to “fluff up” their response and
give the impression of deep thinking. In reality, these factors are unlikely to affect the strength of
the cement mortar. These factors persisted into the team responses, suggesting that some students
were more concerned with making their answers look “good” than with providing rational and
defensible answers.
A notable misconception observed in the conduct of this activity concerns the true size of
multiwalled carbon nanotubes. One example is a description in which a student imagined the
nanotubes entirely “plugging” the pores between the cement particles. This would require the
nanotube width to be comparable to the size of such pores, when in reality the nanotubes are
much smaller (typically by one to three orders of magnitude, depending on the size of the void).
In this student’s response to Question 1, the pores between cement particles were depicted as the
same size as the width of the nanotubes, which were labeled as 1 nm wide. This idea persisted in
the team response for the student’s team but was not as explicitly observed in any other
responses. The student did not provide an answer to the driving question, but the student’s team
listed the highest value among team responses, 2.54 wt%. Most students indicated that the
amount of nanotubes needed would be relatively small, yet this misconception about scale still
appeared to be common; this was to be expected, since one of the objectives of the activities was
to expose students to the scale and size of nanomaterials and how these sizes relate to other
materials students are familiar with.
Each unique factor given in Question 2 (“What information do you think you need to know to
answer the driving question? List all of the factors and/or variables that are needed and identify
which ones are the most important. For each factor/variable listed, offer a rationale for why it is
needed (for example, through which mechanism/s does it affect strength enhancement?)”) was
recorded and the number of occurrences per factor was assessed. Sixty-nine unique factors were
provided in response to this question. Many of these only occurred once or twice. Therefore,
word clouds were utilized to provide a visual representation of these responses. These word
clouds show the 40 most frequent words, among those occurring at least three times in the
analyzed responses, offered in response to Question 2, excluding common English words such as
“the” or “and”. Word clouds for all responses to Question 2 are presented in Figure 4(a) through
Figure 4(c).

Figure 4(a). Word Cloud for Question 2 Overall Responses.
Figure 4(b). Word Cloud for Question 2 Individual Responses.

Figure 4(c). Word Cloud for Question 2 Team Responses.

Besides the obvious frontrunners of “strength” and “nanotubes”, it is of interest to note that
words like “compressive”, “density”, “aggregate” and “water” also stand out. Comparing the
individual response word cloud to the team response word cloud, the responses show some
evidence of “fine-tuning”: certain words increase in usage (larger and darker print) while less
frequent words are rendered in even smaller print, indicating that less relevant factors were
beginning to be eliminated in the team responses. Further evidence is provided by the presence
of fewer words in the team response word cloud.
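As a minimal sketch of how such word clouds could be generated: the use of the third-party wordcloud package and the placeholder response strings below are assumptions, as the paper does not state which tool was used.

```python
from collections import Counter
from wordcloud import WordCloud, STOPWORDS  # third-party package: pip install wordcloud

# Hypothetical transcribed responses to Question 2 (placeholders, not actual data).
responses = [
    "compressive strength of nanotubes, water to cement ratio, nanotube dispersion",
    "density and strength of nanotubes, water content, aggregate size",
    "compressive strength of mortar, nanotubes per unit weight, water to cement ratio",
]

# Count word occurrences, excluding common English words, and keep words used 3+ times.
counts = Counter(w for text in responses
                 for w in text.lower().replace(",", " ").split()
                 if w not in STOPWORDS)
frequent = {w: n for w, n in counts.items() if n >= 3}

# Render the 40 most frequent words as a word cloud image.
cloud = WordCloud(width=800, height=400, max_words=40,
                  background_color="white").generate_from_frequencies(frequent)
cloud.to_file("question2_word_cloud.png")
```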

Conclusions, Recommendations, and Upcoming Work
One of the main objectives of Phase 1 of this project is to answer the question, “How does
exposure to new information, and subsequently learning, visually manifest itself through student
performance on questions requiring schematic responses?” This requires the researcher to make a
decision as to whether they want to measure student performance or student learning and
academic growth. This is a complicated decision – can it be reasonably assumed that improved
student performance is equal to learning? If not, how can we detect and quantifiably measure
learning and academic growth? Therefore, an assumption needs to be made in order to gain any
value from the data presented here; for this project it is assumed that improved student
performance, gained after exposure to new material, indicates at least the presence of learning
and academic growth – that is to say that students needed to absorb and apply this new material
in order to give responses that increased in quality from the first activity to the last. This is, of
course, not suggesting that those students who did not show improvement did not “learn”
anything during the activities. It is impossible to attribute the decrease in a few students’
performances to any one factor, but group dynamics and the design of the activities themselves
may have influenced them. Considering these assumptions, rubrics were utilized to uniformly
measure the change in student performance, and showed that as a class, students made
encouraging progress and demonstrated some evidence of learning, equated to an approximate
average of 8 points on a 100-point scale.
Although the results show that the activities brought an increase in student performance roughly
equivalent to 8-10 points, depending on the depth of evaluation, better design of the activity may
result in a greater increase in performance and subsequently, student understanding. For
example, instructing students to repeat the same activity from one week to the next may cause
some detrimental effects on performance because of students’ attitudes towards the activity.
Students, not understanding the need for repetition the next week, may put more effort into the
first iteration of the activity than the second, possibly accounting for the drop in student
performance as noted in the overall improvement results from the Phase 1 sketches. However, if
the activity were designed so that the question was posed for the first time, students were
exposed to additional information, and then asked during the same session to repeat the activity
using the new information or to correct their first response, students may more readily make the
connection between what they are doing and the purpose of repeating the activity. This method
could also help make students’ own learning visible to them.
It is also recommended that the activity be modified to encourage more academic growth in
understanding the nanoscale. This was a main focus of the activity, yet it showed the least
improvement in the schematic responses, indicating widespread misconception of the size of
nanoparticles even after the introduction of new material. Suggestions include asking students to
relate the nanoscale to a different scale they may be more familiar with, to equate one or more
elements of the schematics to an everyday object for comparison, to provide the number of
orders of magnitude between different elements, or to correct their responses after being given a
piece of information or a hint.
In regard to Phase 2 of the project, examining the impacts of team interactions, it may be of
interest to insert a third step between the individual and team activities, where students are paired
with only one other student before forming the larger teams, using a sort of modified “think-pair-
share” method. This technique may help to pinpoint how specific group interactions impact
responses. Directly asking for feedback from students could also be considered in evaluating
group dynamics, such as requiring rationale on why groups chose their combined responses, or
even specifically asking students to evaluate the group dynamics of their team at the end of the
team activity.
As this is an ongoing project, the next phases will comprise the following activities.
 Tracking the evolution of the responses across both sessions to evaluate depth of student
thinking by: (a) comparing individual responses from Session I, team responses from Session
I, and individual responses from Session II, to see how responses changed between each
iteration of the active learning exercises; and (b) comparing students’ responses during each
iteration to the information presented before each exercise (i.e., videos, presentation,
previous knowledge, etc.) to see how much and what kind of new information was retained
or used by students.
 Identifying possible thinking patterns in individual and group settings by: (a) identifying
possible factors affecting students’ thinking and patterns evident in team and individual
responses; and (b) developing possible indicators of critical thinking in written responses.
 Evaluating student performance based upon written responses by: (a) defining performance
measures to recognize levels of critical thinking from written responses; and (b) drawing
conclusions on the effectiveness of problem-based, active learning in the classroom and the
feasibility of evaluating critical thinking from written responses.
Acknowledgements
This material is based upon work supported by the U.S. Department of Energy Office of Science,
Office of Basic Energy Sciences and Office of Biological and Environmental Research under
award number DE-SC-00012530, and the U.S. National Science Foundation, Division of
Engineering Education and Centers under award number 1343756.
References
[1] Felder, R.M., Woods, D.R., and Stice, J.E. (2000), “The Future of Engineering Education II:
Teaching Methods that Work,” Chemical Engineering Education, 34(1), 26-39.
[2] Hannafin, M.J., and Land, S.M. (1997), “The Foundations and Assumptions of Technology-
Enhanced Student-Centered Learning Environments,” Instructional Science, 25(3), 167-202.
[3] Willingham, D.T. (2007), “Critical Thinking: Why Is It So Hard to Teach?,” American
Educator, AFT, Summer Issue, 8-19.
[4] Berge, N.D., and Flora, J.R.V. (2010), “Engaging Students in Critical Thinking: An
Environmental Engineering EFFECT,” Proc. 117th ASEE Annual Conference & Exposition,
Louisville, KY, AC 2010-1752, 10 p.
[5] Pierce, C.E., Caicedo, J.M., Flora, J.R.V., Berge, N.D., Madarshahian, R., and Timmerman,
B. (2014), “Integrating Professional and Technical Engineering Skills with the EFFECTs
Pedagogical Framework,” International Journal of Engineering Education, 30(6B), 579-589.
[6] Pierce, C.E., Gassman, S.L., and Huffman, J.T. (2013), “Environments for Fostering
Effective Critical Thinking in Geotechnical Engineering Education (Geo-EFFECTs),” European
Journal of Engineering Education, 38(4), 281-299.
[7] Matta, F., and Pierce, C.E. (2014-2015), “Decision Worksheet: Portland Cement Mortar for
Nuclear Waste Storage.” ECIV 303L Civil Engineering Materials Laboratory [Class handout].
University of South Carolina, Columbia, SC.