
Journal of Engineering Education

January 2012, Vol. 101, No. 1, pp. 95–118


© 2012 ASEE. http://www.jee.org

Which ABET Competencies Do Engineering Graduates Find Most Important in their Work?

HONOR J. PASSOW
Dartmouth College

BACKGROUND
ABET-accredited engineering programs must help students develop specific outcomes (i.e., compe-
tencies). Faculty must determine the relative emphasis among the competencies. Yet, information is
sparse about the relative importance of each competency for professional practice.

PURPOSE (HYPOTHESIS)
This study synthesizes opinions of engineering graduates about which competencies are important for pro-
fessional practice.

DESIGN/METHOD
A survey asked undergraduate alumni of a large public university in the Midwest to rate the importance of
the ABET competencies in their professional experience. Responses included descriptions of education,
post-graduate work environment, and demographics. Protected, post-hoc, all-pairwise multiple compar-
isons determined patterns in the importance ratings, for the aggregate, and for descriptive subgroups.

RESULTS
The lowest-rated competency’s mean rating was 3.3 out of 5. Graduates of 11 engineering majors rated a
top cluster of competencies (teamwork, communication, data analysis, and problem solving) significantly
higher than a bottom cluster (contemporary issues, design of experiments, and understanding the impact
of one’s work). Importance ratings of five other competencies fell in an intermediate cluster in which
importance was statistically tied to either the top or bottom cluster, depending on work environment or
academic discipline, not demographics. The clusters were stable over time, that is, over seven survey
administrations (1999–2005), years since graduation (0, 2, 6 & 10), and graduation year (1989–2003).

CONCLUSIONS
Graduates across engineering disciplines share a pattern of importance for professional practice among the
ABET competencies that is statistically significant, consistent across demographic variables, and stable
over time. This pattern can inform faculty decisions about curriculum emphasis within and across engi-
neering disciplines.

KEYWORDS
competencies, engineering, professional practice

INTRODUCTION

Needed: Graduates’ Opinions to Inform the Design of Curricula


Since the advent of ABET Engineering Criteria 2000 (EC2000), accredited engineer-
ing programs have been required to help students develop specific program outcomes (i.e.,
competencies). The relative emphasis among these competencies, however, has been left
for each program to determine. In fact, the competency focus “has significant implications for what knowledge and skills faculty need” (Doherty, Chenevert, Miller, Roth, &
Truchan, 1997, p. 182). Under the EC2000 paradigm, faculty need the ability to envision,
collectively articulate, and prioritize the competencies that students should gain from the
educational program before they graduate in order to prepare for a myriad of career paths
(Passow, 2008). Despite the need, information to support faculty decisions about relative
emphasis among competencies is sparse.
The relative emphasis in the curriculum should be informed by the relative importance
of the competencies required for professional practice. According to the National Engi-
neering Education Research Colloquies: “Students and employers alike expect a high de-
gree of synergy between what is learned in [the] classroom and what is needed in the field
for successful practice” (The Steering Committee of the National Engineering Education
Research Colloquies, 2006). With the worldwide move toward outcomes-based quality as-
surance in engineering education, a body of literature has established lists of competencies
that are important for engineering practice (e.g., American Society for Engineering Educa-
tion, 1994; Crawley, Malmqvist, Ostlund, & Brodeur, 2007; Cupp, Moore, & Fortenberry,
2004; De Graaff & Ravesteijn, 2001; McMasters & Komerath, 2005; Mickelson, 2001,
2002; National Research Council, 1995), including two reviews of published lists
(Alkhairy et al., 2009; Woollacott, 2009) and the Engineer 2020 Report (National Acade-
my of Engineering, 2004). A subset of these competencies has been adopted by accredita-
tion agencies, such as ABET (Prados, Peterson, & Lattuca, 2005). Other lists of compe-
tencies have been developed for specific engineering fields, such as civil engineering
(American Society of Civil Engineers, 2007), environmental engineering (AAEE Envi-
ronmental Engineering Body of Knowledge Working Group, 2008), design engineering
(Davis, Beyerlein, & Davis, 2006), and engineering management (Merino, 2006). How-
ever, for faculty in a program to design a curriculum that “support[s] the integration of
knowledge, skills…, and …values necessary for today’s professional practice” (Sheppard,
Macatangay, Colby, & Sullivan, 2008, p. 4), they need to know the relative importance
among these competencies for engineering practice. Simple lists of the competencies that prac-
ticing engineers believe are “required to satisfy the needs of the profession” (Prados et al.,
2005, pp. 177–178) do not include relative importance in practice.
Naturally, outcomes-based quality assurance has prompted many educational programs
to survey employers about gaps in their curriculum by asking: “Rate the adequacy of our
graduates’ performance on this competency” for each competency in a list. Many of these
results have been published (e.g. Abu-Eisheh, 2004; Brumm, Hanneman, & Mickelson,
2005; Burtner & Barnett, 2003; Katz, 1993; Lucena, 2006; Martin, Maytham, Case, &
Fraser, 2005; Sageev & Romanowski, 2001). A gap is, by definition, the relative difference
between a target for competent performance and an actual performance. A list of compe-
tency gaps, ordered from the largest gap to the smallest, can prioritize areas for improve-
ment for an existing program but cannot generalize to other programs. It is the relative im-
portance of the target competencies themselves—not the gaps—that is needed for
designing programs. According to product design faculty Ulrich and Eppinger (1995), a
list of “relative importance of the needs” (p. 48) is an essential precursor to establishing
specifications for a design. They assert that needs should be expressed in terms of “what the
product has to do” and should “use positive, not negative, phrasing.” A list of competency
gaps does not meet these requirements, while a list of relative importance of the target
competencies does.
A body of literature has focused directly on the relative importance of the competencies
required for successful engineering practice. These studies have used surveys that asked respondents to rate (or rank) the importance of each competency in a list. Of the 19 “impor-
tance” studies published since 1990, thirteen included a balance of technical and profes-
sional competencies (ASME, 1995; Bankel et al., 2003; Benefield, Trentham, Khodadadi,
& Walker, 1997; Evans, Beakley, Crouch, & Yamaguchi, 1993; Koen & Kohli, 1998;
Lang, Cruse, McVey, & McMasters, 1999; Lattuca, Terenzini, & Volkwein, 2006;
National Society of Professional Engineers (NSPE), 1992; Nguyen, 1998; Saunders-Smit,
2008; Shea, 1997; Turley, 1992; World Chemical Engineering Council, 2004) while six
included only professional competencies (de Jager & Nieuwenhuis, 2002; Donahue, 1997;
Kemp, 1999; Meier, Williams, & Humphreys, 2000; Sardana & Arya, 2003; Scott &
Yates, 2002). A balanced inclusion of both technical and professional competencies is es-
sential to inform the design of entire curricula. Each of the 13 studies with balanced inclu-
sion of competencies reported their results as an ordered list of competencies from the
highest average importance to the lowest. Each study displayed and discussed the differ-
ences in average importance among the competencies; however, none of them tested the
differences among ratings of the various competencies for statistical significance. Of the 13
studies, 11 examined the difference in importance rating for various groups of respondents
for each competency, but only three of these studies tested group-to-group differences sta-
tistically. None of the identified studies addressed trends in importance ratings over time.
For faculty who are making decisions about curricular design, it is essential to have confi-
dence in the relative levels of importance of the various competencies. This study addresses
these limitations in prior work by addressing the following questions: Which ABET com-
petencies do engineering graduates rate as most and least important for professional prac-
tice? Are differences in importance ratings (1) statistically significant, (2) consistent across
groups of respondents, and (3) stable over time?

Theory and Literature


The research questions should be refined after considering relevant theory and pub-
lished studies. In this study, competencies are defined as the knowledge, skills, abilities, atti-
tudes, and other characteristics that enable a person to perform skillfully (i.e., to make sound deci-
sions and take effective action) in complex and uncertain situations such as professional work, civic
engagement, and personal life. In this definition, knowledge includes all the types of knowl-
edge defined by Anderson, et al.’s (2001) taxonomy: factual knowledge (terminology and
details), conceptual knowledge (classifications, principles, theories, and models), procedural
knowledge (knowing both how and when to use specific skills and methods), and meta-
cognitive knowledge (self-knowledge and both how and when to use cognitive strategies for
learning and problem-solving). This definition of “competencies” draws on the scholarly
description of competency and performance by the faculty of Alverno College (Men-
tkowski & Associates, 2000) and other international leaders in the field of competency-
based (also called ability-based) higher education (Heywood, 2005; Hutcheson, 1997).
The definition includes language from the field of industrial psychology (Bemis, Belenky,
& Soder, 1983; Ghorpade, 1988; Whetzel, Steighner, & Patsfall, 2000) and higher educa-
tion for the professions (Curry & Wergin, 1997). Thus, in this study, competencies—and
the related term “learning outcomes”—are actual skills and abilities that graduates demon-
strate at the end of their undergraduate program. Competencies are distinct from inputs:
what subjects are taught and the amount of class time spent on each subject. This study as-
sumes that competencies, as opposed to educational credentials alone, are the foundation
of successful professional practice throughout a career, an assumption shared with agencies
that grant licenses for individuals to practice professions (Continuing Professional Education Development Project (University of Pennsylvania), 1981; Houle, 1980; Larson, 1983; Office of the Professions, 2000; Pottinger & Goldsmith, 1979).
The research questions can be focused using two findings from published research.
First, the overall pattern of importance in competencies depends on the practice setting.
That is, different academic disciplines and work environments require different competen-
cies and different relative importance among them. For example, Holland’s (1997) theory
of vocational behavior has been empirically confirmed in many studies (e.g., Assouline &
Meir, 1987; Smart, Feldman, & Ethington, 2000). According to Holland’s theory, each
environment, whether it is a work environment or an academic discipline, has a distinctive
pattern of competencies, values, attitudes, interests, and self-perceptions. These distinct
patterns are maintained and transmitted through self-selection for congruent individuals
and through socialization for all individuals regardless of congruence. A second theory
called “models for superior performance” (Spencer, McClelland, & Spencer, 1994) synthe-
sizes 286 studies of the characteristics of people who perform well at various jobs. Its two
central findings are (1) that there is a different overall pattern of importance among the
competencies for different jobs and (2) that generic models will not fit any specific job per-
fectly because there is such variation in the competencies required in different jobs. A third
theory, by Stark, Lowther, and Hagerty (1986), was devel-
oped empirically from a survey of 2,217 college faculty in professional fields including den-
tistry, medicine, nursing, architecture, business, engineering, law, and journalism. The
Stark framework posits that patterns of importance among competencies are based on pro-
fessional field. Thus, three theories predict that differences in the pattern of importance
ratings of competencies are based on work environment and academic discipline. Six sur-
vey studies have validated this general theoretical prediction for a variety of engineering
practice settings (ASME, 1995; Bankel et al., 2003; Evans et al., 1993; Saunders-Smits,
2005, 2007; Shea, 1997). For example, importance ratings among competencies differed
significantly between industrial engineers and manufacturing engineers (Shea, 1997) and
between aerospace engineering specialists and aerospace engineering managers (Saunders-
Smits, 2005, 2007).
Second, importance ratings may depend on survey wording. Shea’s survey (1997) asked
each respondent to (1) rate each competency and (2) choose the single most important
competency. In the ratings, communication was most important. Yet, in the ranking,
communication was third behind problem solving and teamwork. This raises the question,
how do the results change with different survey wordings?
At this point, the research questions can be refined to reflect the relevant theory
and literature. This study addresses limitations in prior work by addressing the ques-
tions: Which ABET competencies do engineering graduates rate as most and least
important for professional practice? Are the differences in importance ratings (a) sta-
tistically significant, (b) consistent across practice settings, undergraduate engineering
fields, and demographic groups, (c) stable over time, and (d) consistent across differ-
ent wordings of a competency?

METHODS AND THEIR LIMITATIONS

To address these research questions, a questionnaire was distributed to recent undergraduate alumni (up to 10 years after graduation) of a large public university’s College of Engineering in thirteen engineering disciplines. We will call it “Midwestern U.”
The question of interest on the survey was “Please rate how important the following competencies and attitudes have been to you in your professional experience.” See
Table 1 for a list of the 12 ABET competencies. The principal aim of the analysis was
to determine the overall pattern of differences among the importance ratings of compe-
tencies for the sample as a whole. The secondary aim of the analysis was to compare
patterns of importance ratings for each subgroup to the pattern for the sample as a
whole. Sub-groups were defined using 17 survey questions: educational experience
(7 questions), work experience after graduation (8), gender (1), and race (1). The analy-
sis statistically tested the null hypothesis that there are no differences in the median im-
portance ratings for the various competencies.

TABLE 1
The Survey Question of Interest for the Study, Verbatim


Sample and Data Collection


For this study, the target population is engineering graduates in the U.S. A fundamental
question is, “How well does the sample represent the population with respect to ratings of
importance for various competencies?” This is a question about limitations to generalizabili-
ty, which for sampling can be broken into three types of potential bias. Coverage bias is intro-
duced when people in the target population are left out of the list from which the sample is
selected. Selection bias is introduced when some people on the list don’t have a fair chance of
receiving the survey. Non-response bias is introduced when people who receive the survey do
not respond. These potential limitations will be discussed along with the methods.
The data were collected in seven annual cycles (Table 2). Each year, three graduating
classes were selected—two years since graduation, six years since graduation, and ten years
since graduation. One aspect of the subgroup analysis additionally used data from an annu-
al survey of graduating seniors. Each year, the alumni survey mailing list was composed of
every living graduate with an active address in the alumni database (approximately 96% of
the original graduating classes). The senior survey was distributed to every degree applicant
prior to graduation. Thus, selection bias was minimal: only alumni who had removed
themselves from the alumni database were omitted from the selected sample. The overall
response rate for the seven annual administrations of the alumni survey was 21%, with 4,225 responses from 20,081 degrees granted in the target years. The overall response rate for the three annual administrations of the senior survey was 50%, with 1,628 responses from 3,288 degree applications. Sub-sample sizes by survey year, by survey wording, and by alumni year are shown in Table 2. Sub-sample sizes by engineering discipline were as follows (n = original wording + revised wording = total): aerospace engineering (n = 196 + 193 = 389), chemical engineering (n = 285 + 328 = 613), civil engineering (n = 192 + 194 = 386), computer engineering (n = 148 + 238 = 386), computer science (n = 0 + 24 = 24), electrical engineering (n = 253 + 257 = 510), industrial engineering (n = 256 + 384 = 640), marine engineering (n = 74 + 58 = 132), materials engineering (n = 71 + 96 = 167), mechanical engineering (n = 514 + 650 = 1164), and nuclear engineering (n = 26 + 33 = 59).
Such low response rates can limit generalizability. Because non-response bias was a
concern, demographic data from the sample was compared to demographic data in institu-
tional records. To determine discrepancies between the observed and expected frequencies
on specific variables, the chi-squared test was used. The chi-squared tests indicated that
weighting to reduce non-response bias was not needed for undergraduate grade-point-av-
erage, but should be considered for gender (alumni and senior surveys), race (alumni and
senior surveys), year of graduation (alumni survey only), semester of graduation (senior sur-
vey only), undergraduate major (alumni survey only), and how the graduate entered the
college (senior survey only). The tradeoff between reducing non-response bias and the sta-
tistical instabilities introduced by differentially weighting cases was carefully considered in
consultation with Eric Dey, an expert on using weights to mitigate the effects of non-re-
sponse in survey research. The following variables were weighted because of their high
likelihood of affecting the analysis: gender, race, and year (or semester) of graduation. Cus-
tomary procedures for multivariate weighting and normalizing (Dey, 1997) were based on
three-dimensional tables: gender by race by year (or semester) of graduation.
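To make these two steps concrete (the chi-squared check for non-response bias and the cell-based weighting), the sketch below illustrates one way such an adjustment could be computed. The counts, category labels, and the simple "population share over sample share" weighting rule are illustrative assumptions for exposition; they are not the study's data or the exact procedure of Dey (1997).

```python
import numpy as np
import pandas as pd
from scipy.stats import chisquare

# Hypothetical counts: respondents by gender versus institutional records.
observed = np.array([520, 180])                # survey respondents: male, female
population_share = np.array([0.78, 0.22])      # share in institutional records
expected = population_share * observed.sum()   # expected counts if there were no bias

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")  # a small p suggests weighting is worth considering

# Cell weights from a gender-by-race-by-graduation-year table (counts made up):
# weight = (population share of the cell) / (sample share of the cell),
# so the weighted sample matches the population on these margins.
cells = pd.DataFrame({
    "gender": ["M", "M", "F", "F"],
    "race":   ["A", "B", "A", "B"],
    "year":   [1999, 1999, 1999, 1999],
    "n_sample":     [300, 220, 100, 80],
    "n_population": [1200, 700, 500, 400],
})
cells["weight"] = (
    (cells["n_population"] / cells["n_population"].sum())
    / (cells["n_sample"] / cells["n_sample"].sum())
)
# With this ratio the weighted respondent count equals the raw count, so no
# further normalization is needed for this toy table.
print(cells)
```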
As described previously, selection bias was minimal and non-response bias was re-
duced. The question of coverage bias remains, “Which engineering graduates in the U.S.
were left out of the list from which the sample was selected?” Of the engineering bachelor’s
degrees awarded annually in the U.S. (Gibbons, 2009), 87% are granted in the thirteen majors offered at Midwestern U. Therefore, Midwestern U offers reasonable coverage of the undergraduate engineering fields in the population: aerospace engineering, biomedical engineering, chemical engineering, civil engineering, computer engineering, computer science, electrical engineering, industrial engineering, marine engineering, materials engineering, mechanical engineering, nuclear engineering, and interdisciplinary engineering.

TABLE 2
Sample Design and Sample Size for the CoE Alumni and Senior Populations

Yet, restricting the sample to recent graduates of a single institution is cause for serious concern about coverage bias. Because theory and literature indicate that the overall pattern
of importance in competencies depends on the practice setting, occupations in the sample
were compared to occupations of engineering graduates in a nationally representative data
set. A 1999 survey of 24,716 U.S. residents under age 75 holding an engineering degree
(bachelor’s or higher) asked the question: “If you are employed or self-employed, which
category below BEST describes your job?” Respondents then identified their job code
from a two-page list. The National Science Foundation (NSF) weighted the survey re-
sponses to represent the estimated population of 2.3 million engineering graduates em-
ployed in the U.S. in 1999 (Kannankutty & Wilkinson, 1999). After obtaining the data
set from the NSF, we organized the occupations into categories (Figure 1) in order to
build a survey question about occupations for Midwestern U’s engineering alumni.
When our sample was collected, we compared the NSF national proportions of engi-
neering graduates in various occupations (Figure 1) with the occupation data from our
Midwestern U sample (Figure 2). The distributions are strikingly similar, except that the Midwestern U data have a noticeably smaller proportion of non-engineering science and technology occupations. The close percentages for management are surprising because the Midwestern U data are limited to respondents within 10 years after graduation while the NSF data include engineering graduates up to 75 years old. Taken altogether, it ap-
pears that the Midwestern U sample is fairly representative of the U.S. population of engi-
neering graduates with respect to occupation, which reduces one aspect of coverage bias.
Yet, coverage bias may still limit generalizability: the recent alumni in our sample may un-
derestimate the importance of competencies that are essential later in a career, and the re-
spondents’ importance ratings may be biased by institution-specific values they adopted
during their undergraduate years.

Data Analysis
The principal aim of the analysis was to determine the overall pattern of differences
among the importance ratings of competencies for the sample as a whole. This is not a
common analytic goal, so the methods require explanation. One analytic question for the
main effect was: Taking every respondent to the revised wording of the survey (n = 2115)
as a single group, do their importance ratings for the 12 competencies differ, and if so, what is the
pattern of differences among the competency ratings? For this question, the competencies are
analogous to 12 experimental treatments. Because each survey respondent rated all 12 com-
petencies, the ratings are not independent for each competency. Thus, in statistical termi-
nology pertaining to the design of experiments, each respondent is a block and each compe-
tency is a treatment, which makes this a two-way layout or two-way classification. The
two-way layout helps control for variations among raters (harsher and more generous
raters), and therefore substantially reduces the chance of failing to detect a difference when,
in truth, a difference exists (Type II error) (e.g., Spiegel, 1990; Trumbo, 2002). Also, be-
cause the respondents chose their own ratings, the ratings are random variables for each of
the 12 fixed competencies (e.g., Devore, 1995; Hogg & Ledolter, 1987). In addition, be-
cause there are 12 competencies (or treatments) to compare, a post-hoc multiple compari-
son procedure is recommended for minimizing false detections, in other words detecting a
difference when, in truth, no difference exists (Type I error) (e.g., Hsu, 1996; Miller, 1981;
Trumbo, 2002). Therefore, if the data were normally distributed, the two-step analytic ap-
proach would be to use analysis of variance (ANOVA) to find out if any differences exist
among the ratings of the competencies, and if so, perform Tukey’s multiple comparison test to determine the pattern of differences in the importance ratings, that is, which importance ratings differ with statistical significance.

FIGURE 1. Occupations of engineering graduates, national estimates. Percentages are based on the National Science Foundation (NSF) estimate of 2.3 million engineering graduates employed in the U.S. in 1999. Source: SESTAT 1999, a nationally representative probability sample of self-reports from 24,716 engineering graduates weighted for non-response bias.

However, histograms of the ratings for the various competencies show that the rat-
ings are not normally distributed. The respondents predominantly used the upper end of
the 5-point rating scale, so the ratings are highly skewed and show dramatic ceiling ef-
fects. A statistical consultant suggested a non-parametric approach (as opposed to data
transformations) to accommodate the skew while maintaining the inherent meaning of
the ratings. Consequently, because normal distributions cannot be assumed, only non-
parametric statistics were used (e.g., Daniel, 1990; Trumbo, 2002). Thus, statistically
speaking, the appropriate analysis is nonparametric, post-hoc multiple comparisons of
location for a mixed-effects, complete block, two-way layout. The Friedman rank sums
test is the “nonparametric analogue of the parametric two-way analysis of variance”
(Daniel, 1990, p. 262) most commonly used in practice (e.g., Hsu, 1996; Miller, 1986;
Zar, 1999). The multiple comparison test based on the Friedman rank sums proposed by
Nemenyi (1963) is widely recommended for non-parametric multiple comparisons for
complete blocks in a two-way layout (e.g., Daniel, 1990; Hollander & Wolfe, 1973;
Miller, 1981, 1986; Oude Voshaar, 1980; Zar, 1999).


[Figure 2: bar chart comparing the occupational distribution of US engineering graduates (NSF national estimates) with Midwestern U respondents across five categories: engineer, manager, other science/technology work, non-engineering work, and marketing/sales.]

FIGURE 2. Occupations of Midwestern U alumni survey respondents compared with national estimates. The Midwestern U survey item was “If you are employed or self-employed, which category below BEST describes your job?” Percentages are based on the total of 1910 Midwestern U alumni responses to this question (2002–03 to 2005–06) weighted for non-response.

The distribution-free Friedman rank sum test was used to test the null hypothesis
that the population distributions for the treatments are the same (Wagner, 1992), or
more specifically that the medians of all the treatments are equal (Daniel, 1990;
Trumbo, 2002). For each respondent, the importance ratings of the competencies
were ranked from 1-12 (with rules for ties); these ranks are the data used in the test.
The technique, proposed by Friedman (1937, 1940), assumes that (1) the blocks (re-
spondents in this analysis) are mutually independent, (2) the variable of interest (im-
portance rating in this analysis) is continuous, (3) there are no interactions between
blocks and treatments (between respondents and competencies in this analysis), and
(4) the observations for each block (or respondent) may be ranked in order of magni-
tude (Daniel, 1990). Assumptions 1, 3, and 4 are satisfied. Assumption 2 is not met
because the 5-point rating scale is discrete, not continuous. However, there are no
nonparametric alternatives for non-continuous data, so we used the Friedman rank sums test with the customary 5% chance of a Type I error (α = .05) despite the limitation introduced by violating Assumption 2.
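As a concrete illustration of this step, the sketch below applies SciPy's Friedman test to a simulated respondents-by-competencies rating matrix. The simulated ratings and cluster structure are assumptions for exposition only; the study's actual data are not reproduced here.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# Simulated data: each row is one respondent (block), each column one of the
# 12 competencies (treatments), rated on the 5-point importance scale.
n_respondents, n_competencies = 200, 12
ratings = rng.integers(3, 6, size=(n_respondents, n_competencies))  # skewed toward the top of the scale
ratings[:, :4] += 1                       # pretend the first four columns are a "top cluster"
ratings = np.clip(ratings, 1, 5)

# Friedman's test ranks the 12 ratings within each respondent and tests the
# null hypothesis that the competencies' median ratings are equal.
stat, p = friedmanchisquare(*(ratings[:, j] for j in range(n_competencies)))
print(f"Friedman chi-squared = {stat:.1f}, p = {p:.2e}")
```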
Nemenyi’s multiple comparison tests the following null hypothesis: the distributions
are the same for specific pairs of treatments (competencies in this analysis). As with any
multiple comparison procedure, the critical values are chosen to limit the Type I error rate
for the entire analysis instead of for each individual comparison. In this analysis, the level of
significance (studywise α = .05) is split among the 66 comparisons (comparing each com-
petency with each of the other 11 competencies). For a group of respondents, the mean
rank for each competency was used in the large sample formula for Nemenyi’s test (Miller,
1981, equation 131, p. 174).

$$\lvert \bar{R}_i - \bar{R}_j \rvert \geq q_{k,\alpha}\sqrt{\frac{k(k+1)}{12n}}$$

where $\bar{R}_i$ is the mean rank for competency i, k is the number of competencies in the analysis, n is the number of respondents in the analysis, and $q_{k,\alpha}$ is the upper-α percentage point of the Studentized range for k groups and infinite degrees of freedom (Miller, 1981, Table B1, pp. 234–237). The results of this test are dis-
played graphically as “tie-lines” on a graph, which tie together competencies whose impor-
tance ratings do not differ with statistical significance. In summary, if any significant differ-
ences exist according to the Friedman rank sums test (α = .05), the Nemenyi multiple comparison test (studywise α = .05) reveals the overall pattern of differences in the importance ratings of the
competencies (specifically, differences in individuals’ ranks of the competencies averaged across all
the individuals in the group).
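Continuing the simulated example above, the sketch below applies this large-sample criterion directly: rank each respondent's ratings, average the ranks per competency, and record as "tied" any pair whose mean ranks differ by less than the critical difference. The tabulated critical value used in the usage comment (q ≈ 4.62 for k = 12 groups, infinite degrees of freedom, α = .05) and all variable names are illustrative assumptions rather than the study's own code.

```python
import numpy as np
from scipy.stats import rankdata

def nemenyi_tie_matrix(ratings: np.ndarray, q_crit: float) -> tuple[np.ndarray, np.ndarray]:
    """Return mean ranks and a k-by-k boolean matrix of statistically tied pairs.

    ratings: respondents-by-competencies matrix of importance ratings.
    q_crit:  Studentized range percentage point q_(k, alpha) for infinite df.
    """
    n, k = ratings.shape
    # Rank each respondent's ratings (ties get average ranks).  The study ranked
    # 1 = highest rating; rankdata ranks ascending, but only rank *differences*
    # enter the criterion, so the direction does not matter here.
    ranks = np.apply_along_axis(rankdata, 1, ratings)
    mean_ranks = ranks.mean(axis=0)
    # Large-sample Nemenyi criterion: competencies i and j are "tied" when
    # |mean_rank_i - mean_rank_j| < q_crit * sqrt(k(k+1) / (12n)).
    critical_difference = q_crit * np.sqrt(k * (k + 1) / (12 * n))
    tied = np.abs(mean_ranks[:, None] - mean_ranks[None, :]) < critical_difference
    return mean_ranks, tied

# Example call with the simulated `ratings` from the previous sketch; 4.62 is
# approximately the tabulated q for k = 12 and alpha = .05 (infinite df).
# mean_ranks, tied = nemenyi_tie_matrix(ratings, q_crit=4.62)
```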
The secondary aim of the analysis was to compare patterns of importance ratings for 133
sub-groups to the pattern for the sample as a whole—the aggregate pattern. The 133 sub-
groups were gender (2 groups; male and female), race (6), data collection year (7), graduation
year (15), alumni year (i.e., years since graduation) (3), mode of entry into engineering (e.g.,
freshman declaration, transfer student…) (4), single-major or double major (2), undergraduate
major (11), grade point average (4), satisfaction with undergraduate experience (5), satisfaction
with career services (5), number of additional degrees (3), additional degrees (8), number of
employers since graduation (7), employment status (4), employer’s business (18), type of job
(6), type of engineering job (13), professional registration status (2), and income (8).
For each of the 133 groups, the Friedman rank sum test and the Nemenyi multiple
comparison test were conducted to determine the pattern of differences in the importance
ratings for that group. The additional step in the subgroup analysis was to identify sub-
groups whose pattern of ratings differed from the aggregate pattern with statistical significance.
Graphing by sub-groups showed differences in rating patterns among sub-groups. On first
exploration, the sub-group data appeared to be a dizzying jumble of deviations from the
overall pattern of differences among ratings of competencies for the entire data set. How-
ever, a strong underlying pattern was evident when the competencies were grouped into
three clusters. A top cluster (teamwork, communication, data analysis, and problem solv-
ing) had top ratings in all but two of the 133 groups. A bottom cluster (contemporary is-
sues, experiments, and impact) had bottom ratings in all but a few groups. An intermediate
cluster (“math, science, and engineering knowledge”, ethics, lifelong learning, design, and
engineering tools) had ratings that ranged widely, depending on the subgroup. From these
observations, we developed a criterion for a difference from the aggregate pattern, based on
the “tie” lines for the sub-group: if any competencies from the top and bottom clusters were
“tied” according to the Nemenyi multiple comparison test, that group’s pattern was said to
differ from the aggregate pattern with statistical significance.
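The subgroup criterion described above reduces to a small check on the tie matrix from the previous sketch: a subgroup's pattern is flagged as differing from the aggregate if any top-cluster competency is tied with any bottom-cluster competency. The column indices below assume the competencies are stored in the a, b1, b2, c, ..., k order used in the figures; that ordering, and the function name, are assumptions of this sketch.

```python
import numpy as np

# Column positions under the assumed a, b1, b2, c, d, e, f, g, h, i, j, k order.
TOP_CLUSTER = [4, 7, 2, 5]      # d) teams, g) communication, b2) data analysis, e) problem solving
BOTTOM_CLUSTER = [10, 1, 8]     # j) contemporary issues, b1) experiments, h) impact

def differs_from_aggregate(tied: np.ndarray) -> bool:
    """True if any top-cluster competency is statistically tied with any
    bottom-cluster competency for this subgroup (the study's criterion)."""
    return bool(tied[np.ix_(TOP_CLUSTER, BOTTOM_CLUSTER)].any())

# Usage: compute `tied` for a subgroup with nemenyi_tie_matrix(), then call
# differs_from_aggregate(tied) to apply the cluster-separation criterion.
```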

RESULTS AND DISCUSSION

This study pursued the question: Which ABET competencies do engineering gradu-
ates rate as most and least important for professional practice? Data to answer this question
came from a survey of recent alumni of a large Midwestern University’s College of Engi-
neering. Respondents rated the importance of the ABET competencies in their profes-
sional experience.


[Figure 3 plots overall mean importance ratings (n = 2115 recent engineering alumni, revised wording) for the ABET competencies a-k. Rating scale: 5 = “extremely important,” 4 = “quite important,” 3 = “somewhat important,” 2 = “slightly important,” 1 = “not at all important.” All competencies in this study have mean ratings greater than “somewhat important.” Horizontal “tie lines” above the data show 6 statistically distinct levels of importance (studywise α = 0.05) by tying together competencies whose ratings are not significantly different. (Specifically, each respondent’s ratings were ranked from 1, the highest rating, to 12, and differences were tested between the mean rank for each competency over all respondents.)]

FIGURE 3. Importance ratings for the ABET competencies, revised wording. The survey
question was: “Please rate how important the following competencies and attitudes have
been to you in your professional experience.” Verbatim competencies are in Table 1.

Significance of Differences in Importance Ratings


Levels of importance for the various competencies differed with statistical significance
(studywise α = .05). The statistical “tie lines” for all 2115 respondents to the revised wording
are in Figure 3. (Note: “ABET Engineering Criterion 3 Program Outcomes a-k” has been
shortened to “ABET Competencies a-k.”) Although the mean rank of the ratings for each
competency was tested, the graphs show mean ratings to allow useful interpretation with
respect to the survey question because plots of mean ranks would be difficult to interpret. A
companion analysis for all 2110 respondents to the initial wording was similar. Although the
mean importance ratings shifted a little between the two analyses and the “tie lines” shifted a
little as well, the importance levels of the top-cluster competencies were never “tied” with the
bottom-cluster competencies. The mean importance ratings ranged from 3.4 to 4.7 for the
initial wording and from 3.3 to 4.6 for the revised wording. In both surveys, the lowest-rated
competency had a mean rating well above 3 out of 5. This affirms the importance of the
ABET competencies for professional practice. Our minimum mean rating of 3.3 on a 5-
point scale falls in the range of lowest mean ratings for any ABET competency in other sur-
vey studies (2.5 to 3.9 on a 5-point scale) (Bankel et al., 2003; Benefield et al., 1997; Evans et
al., 1993; Koen & Kohli, 1998; Lang et al., 1999; Lattuca et al., 2006; National Society of
Professional Engineers (NSPE), 1992; Saunders-Smit, 2008; Shea, 1997).

Consistency of Importance Ratings Across Subgroups


The pattern of mean importance ratings was examined for all 133 subgroups, including
the 11 subgroups—undergraduate majors—shown in Figure 4. Note that the lowest mean
rating for a top-cluster competency was above the highest mean rating for a bottom-cluster
competency. The statistical independence of all the competencies in the top and bottom clus-
ters held for 117 out of the 133 subgroups (88%) (studywise α = .05).


[Figure 4 plots the mean importance rating for each ABET competency a-k, with one line per undergraduate major: Aerospace Eng. (n=193), Chemical Eng. (n=328), Civil Eng. (n=194), Computer Eng. (n=238), Computer Sci. (n=24), Electrical Eng. (n=257), Industrial Eng. (n=384), Marine Eng. (n=58), Materials Eng. (n=96), Mechanical Eng. (n=650), and Nuclear Eng. (n=33). Annotations mark the top-cluster and bottom-cluster competencies.]

FIGURE 4. Importance ratings for the ABET competencies by undergraduate major, re-
vised wording (survey years 2002–2005). Graduates of 11 engineering majors valued a top
cluster of competencies consistently higher than a bottom cluster.

A summary of the clusters and statistically significant exceptions appears in Table 3. The cluster independence
criterion held for all time-related variables (survey year, graduation year, and years since grad-
uation) and all demographic variables (gender and race). The criterion also held for all levels
of many other variables: all methods of entry into an engineering major (as a freshman or
transfer student), all undergraduate grade-point-average levels, all numbers of majors (single-
majors and double-majors), all levels of satisfaction with career services, all numbers of gradu-
ate degrees (from 0 to 5), all numbers of employers since graduation, all job classifications (en-
gineer, science/technology work, marketing, or manager), all income levels, and both types of
professional engineering status (registered and unregistered). The cluster independence crite-
rion held for most undergraduate majors (9 out of 11), most levels of satisfaction with the un-
dergraduate experience (4 out of 5), most additional degrees (5 out of 8), most employment
status categories (3 out of 4), most employers’ business categories (13 out of 18), most engi-
neering job types (10 out of 13), and most levels of satisfaction with career (4 out of 5).
Only 16 out of 133 subgroups (12%) had a pattern of importance ratings that dif-
fered significantly from the aggregate with respect to the cluster independence criterion
(studywise α = .05). All 16 of these differing groups can be categorized by practice set-
ting or undergraduate major, as was predicted by theory (Assouline & Meir, 1987;
Holland, 1997; Smart et al., 2000; Spencer et al., 1994; Stark et al., 1986) and prior
empirical work (ASME, 1995; Bankel et al., 2003; Evans et al., 1993; Saunders-Smits,
2005, 2007; Shea, 1997). Ten of the sixteen atypical subgroups rated “experiments”
equal with the top cluster. Competence in “designing and conducting experiments” is
prized at top-cluster importance among materials engineering majors, researchers in
any field (engineering faculty, engineering researchers, those holding a doctorate in en-
gineering, those who work at universities), and those involved with products that
require extensive testing (test engineers and those working in computer hardware,

medical devices, or communications). The other group that highly prized competence with experiments was composed of those who were neither employed nor students, which is actually the null option for work environment. Computer science majors rated “contemporary issues” at a top-cluster importance level. Two medically-related groups, those holding an M.D. degree and those working in the healthcare industry, rated problem solving below the top cluster and contemporary issues and impact at a top-cluster level. Those holding a doctorate in a field outside of engineering rated “contemporary issues” and impact at a top-cluster importance level.

TABLE 3
Engineering Graduates’ Ratings of Importance for ABET Competencies, Overall Pattern, and Exceptions

In summary, when there were differences from the aggregate pattern of importance rat-
ings (Table 3), they were associated exclusively with undergraduate major or work environ-
ment and not with time-related or demographic variables. This was predicted by Holland’s
(1997) theory of vocational behavior, the theory of “models for superior performance”
(Spencer et al., 1994), and Stark et al.’s (1986) framework of importance of competencies.
However, the differences from the aggregate pattern should not distract from the strong,
dominating aggregate pattern itself: the top-cluster competencies (teamwork, communi-
cation, data analysis, and problem solving) are universally deemed most important by engi-
neering graduates of all 11 engineering majors and 120 of the 122 other subgroups investi-
gated in this study. The bottom-cluster competencies (contemporary issues, experiments,
and impact) were rated as less important to professional practice than the top cluster com-
petencies by 117 out of 133 subgroups. The pattern of importance ratings among the com-
petencies is distinctive among the engineering disciplines primarily in the intermediate
cluster (“math, science, and engineering knowledge”, ethics, lifelong learning, design, and
engineering tools) which may be statistically tied with top-cluster or bottom-cluster com-
petencies, depending on the subgroup. Essentially, it is the intermediate cluster of compe-
tencies which “capture some of the subtleties of…variations in values” (Lattuca, Terenzini,
Harper, & Yin, 2010) among engineering disciplines and other subgroups, but graduates
of all engineering disciplines share a common value on the top-cluster competencies.

Stability of Importance Ratings Over Time


Importance ratings over time for the 4,225 responses are presented in Figure 5 for the
2,110 responses to the survey with the initial wording and the 2,115 responses to the survey
with the revised wording. Note the clear distinction between the top-cluster competencies
and the bottom-cluster competencies. Also note that “data analysis” was not on the survey
until 2004.
The ratings in Figure 5 are stable: the lines that link the importance ratings for the
competencies across time are essentially horizontal and parallel. This graphical depiction
of stable ratings over time was statistically confirmed (studywise α = .05). Importance ratings for each competency over years since graduation (seniors, 2-year-alums, 6-year-alums, and 10-year-alums) were even more stable than the ratings in Figure 5. In fact, the top-
cluster and bottom-cluster competencies that are graphically evident in Figures 4 and
5 remained statistically distinct across all years since graduation (seniors, 2-year-alums,
6-year-alums, and 10-year-alums) and across all graduation years (1989-2003) (studywise α = .05). In short, the aggregate pattern of importance ratings is consistent over
time by several measures.

Consistency of Importance Ratings Across Wording


Wording changes altered some of the importance ratings. In 2002, the wording of three compe-
tencies changed: communication (top cluster), ethics (intermediate cluster), and life-long
learning (intermediate cluster). These three competencies changed importance ratings
when the wording was changed, but were essentially stable for a given wording. The word-
ing of the other eight competencies remained the same, and their importance ratings re-
mained stable across the wording change. Also in 2002, the wording of the “importance”
rating options changed for all competencies from options worded in terms of “usefulness” to options worded in terms of “importance.” Thus, some wording changes led to importance ratings that changed relative importance level with statistical significance (studywise α = .05), while others did not. This is consistent with findings in other studies. Shea’s (1997) results show that relative importance ratings depend strongly on wording, while Bankel et al. (2003) reported minimal effects. Despite the wording changes, clusters of importance ratings did not change.

[Figure 5 plots the mean importance rating for each competency by survey year, with the initial wording† used in 1999–2001 and the revised wording†† used in 2002–2005. Legend, in descending order of aggregated importance: d) teamwork; g) communication (oral and written); b2) data analysis; e) problem solving; a) math, science & engineering; f) ethics; i) life-long learning; c) design; k) engineering tools; j) contemporary issues; b1) experiments; h) impact. Survey-year sample sizes: 1999 (n=857), 2000 (n=700), 2001 (n=553), 2002 (n=484), 2003 (n=682), 2004 (n=530), 2005 (n=419).]

Note. † The response options in the initial wording were 5 = “always necessary,” 4 = “often useful,” 3 = “useful,” 2 = “rarely useful,” 1 = “never used, needed.” †† The response options in the revised wording were 5 = “extremely important,” 4 = “quite important,” 3 = “somewhat important,” 2 = “slightly important,” 1 = “not at all important.”

FIGURE 5. Importance ratings for the ABET competencies by survey year. The survey question was: “Please rate how important the following competencies and attitudes have been to you in your professional experience.” Competency descriptions are in Table 1. In the legend, the competencies are listed in descending order of aggregated importance ratings for the initial and revised wordings combined. Note the separation between the top cluster competencies (teamwork, communication, data analysis, and problem solving) and the bottom cluster competencies (contemporary issues, experiments, and impact).

CONCLUSIONS

Under EC2000, accredited engineering programs must help students develop specific
program outcomes (i.e., competencies), but the relative emphasis among these competen-
cies has been left for each program to determine. Leading engineering educators assert that
the relative emphasis in the curriculum should, among other considerations, be informed
by the relative importance of the competencies required for professional practice (e.g., Pra-
dos et al., 2005; The Steering Committee of the National Engineering Education Re-
search Colloquies, 2006). Yet, information about relative importance among competencies
is sparse. This study surveyed 4,225 recent engineering graduates of a single institution,
asking them to rate the importance of the ABET competencies in their professional experience. Their responses were used to answer the question: Which ABET competen-
cies do engineering graduates rate as most and least important for professional practice?
Consistent with the findings in previous studies, mean ratings for all the ABET com-
petencies were above the middle of a five-point scale. The importance ratings for a top
cluster of competencies were statistically distinct from the ratings for a bottom cluster of
competencies with only a few exceptions (Table 3). With few exceptions, engineering
graduates value a top cluster of competencies (teamwork, communication, data analysis,
and problem-solving) significantly higher than a bottom cluster (contemporary issues, ex-
periments, and understanding the impact of one’s work). Competencies in the intermedi-
ate cluster (“math, science, and engineering knowledge,” ethics, life-long learning, design, and engineering tools) may be statistically tied to competencies in either the top or bottom
cluster, depending on work environment or academic discipline. The clusters were stable
over time, that is, over seven survey administrations, eighteen graduation years, and over the
period of zero to ten years after graduation. The descending sequence of importance rat-
ings changed significantly with wording changes on the survey, which is consistent with
findings in other studies. However, the clusters remained consistent across wording
changes. In short, the clusters capture the underlying pattern.
Naturally, there were variations from the aggregate pattern for each of the 133 sub-
groups; however, the only groups that differ from the aggregate pattern with statistical sig-
nificance are based solely on work environment and academic major, not on variables that
are demographic or time-related. This finding was consistent with theory (Holland, 1997;
Spencer & Spencer, 1993; Stark, Lowther, & Hagerty, 1987) and empirical work among
engineers (ASME, 1995; Bankel et al., 2003; Evans et al., 1993; Saunders-Smits, 2005,
2007; Shea, 1997). The engineering graduates in this study all rated the top cluster compe-
tencies of teamwork, communication, data analysis, and problem solving as most impor-
tant. Only physicians and those working in the healthcare industry rated problem solving
as slightly lower than the top cluster. This finding has implications for curriculum design:
graduates of all engineering disciplines share the view that these top cluster competencies are highly
important in their professional experience, regardless of their work environment. Similarly, the
bottom cluster competencies of contemporary issues, experiments, and impact were rated
with lowest importance across all engineering majors except for materials engineering,
where experiments was rated as top-cluster importance. Researchers in any field and those
whose work involves product testing also valued experiments highly. “Contemporary is-
sues” was rated as top-cluster by computer science majors. In short, the importance-level
clusters are an excellent first approximation of the importance of the ABET competencies
for professional work for engineering graduates of any major.

IMPLICATIONS

The major limitation of this study is potential coverage bias due to a sample from a sin-
gle institution. A second limitation is sampling only recent graduates—those within
10 years after graduation. A sample including graduates of all ages would, in essence, bring
employers’ perspectives into the sample. A third limitation is using only one wording for
nine of the competencies and only two wordings for three of the competencies. Future re-
search should sample multiple institutions, include engineering graduates of all ages, and
use multiple wordings of the competencies. This would address the limitations of this
study and help answer the question, “Can the results of this study be generalized to other institutions?” Even though this sample included undergraduate programs in 11 engineering disciplines over eighteen graduation years, concern remains that aspects of the single
institution, such as curricular emphasis during the period, might have biased respondents’
perceptions of which competencies are important. In fact, concern about generalizability
led the author to conduct a meta-analysis of related studies. Preliminary results of the
meta-analysis (Passow, 2008) are remarkably similar to the results of this study; the few
differences between the meta-analysis and this study will be explored in a manuscript soon
to be submitted by the author.
Despite the limitations of this study, these findings alone call for action. The dramatic
difference in ratings between data analysis (in the top cluster) and design of experiments (generally in the bottom cluster) should be of note to ABET. ABET’s Engineering
Criterion 3 Program Outcomes “b” lumps the two competencies together in a way that can
allow data analysis to be lost: “an ability to design and conduct experiments, as well as to
analyze and interpret data.” In light of these results, ABET should consider formally split-
ting the two into separate program outcomes as they have done on some of their internal
documents. This change would ensure that data analysis gets attention in the curriculum
equal to its top importance to engineering graduates across engineering disciplines.
The research agenda set forth by the National Engineering Education Research Col-
loquies includes engineering epistemologies: “the profession needs research that… char-
acterize[s] the…engineering knowledge…[that is] essential for identifying and solving
technical problems within dynamic and multidisciplinary environments” (The Steering
Committee of the National Engineering Education Research Colloquies, 2006, p. 260).
This research area is the focus of this study. This study shows that according to recent en-
gineering graduates of any major, all the ABET competencies are deemed important for
professional work and that the importance-level clusters are an excellent first approxima-
tion of the importance of the ABET competencies for professional work. This study
shows that the top-cluster competencies (teamwork, communication, data analysis, and
problem solving) are universally deemed most important by engineering graduates of all
11 engineering majors. Therefore, when faculty work with advisory boards and employers
on the design or development of an engineering curriculum, they should consider placing
special emphasis on the “top cluster” competencies. Special emphasis could be added by
setting students’ technical work in the context of decision-making situations such as those en-
countered in professional life. Such a design would address multiple competencies simul-
taneously, including ethics, contemporary issues, and understanding the impact of one’s
work. Such action would address a problem observed by Karl Smith: “There’s a pretty big
gap between what engineers do in practice and what we think we’re preparing them for”
(Lord, 2010, p. 45).
Engineering faculty may groan inwardly at the notion of adding anything to their jam-
packed curriculum. However, engineering educators should feel comforted that other
professions, such as medicine, business, accounting, teacher education, and nursing, share
the struggle to create curricula that develop students’ technical and professional compe-
tencies beyond the traditional body of technical knowledge (Carraccio, Wolfsthal, Eng-
lander, Ferentz, & Martin, 2002; Jones, 2002; The Economist, 2007). The curricula in
these other professions are remarkably different on the surface. Yet, when examined
through the lens of a designer, a strong common principle emerges. Successful efforts do
not simply “tack on” additional competencies. Successful curricula are redesigns, begin-
ning with the competencies as new specifications—an approach encouraged by leaders
in engineering education (e.g., Lord, 2010; Sheppard et al., 2008). Here is a technical

112
101 (January 2012) 1 Journal of Engineering Education

analogy. The successful designs don’t simply cobble together separate functions, such as
adding a fax machine and a stand-alone scanner to a computer system. The successful
curricula start from scratch and combine the competencies in a more efficient package,
like a combination printer-fax-scanner does. Across the professions, curricula that develop
and integrate the technical and professional competencies are built on a central design principle:
embed the technical content in the context of professional practice. This simple idea is transfor-
mational. It leads naturally to learning the engineering competencies in the context of
applications, where decision-making, ethics, contemporary issues, and communications
abound. In summary, creating curricula that help students develop and integrate the
technical and professional competencies will require that we embed the content in the
context of professional practice. Accomplishing this will require design, a competency
that many engineering faculty have in abundance.

REFERENCES

AAEE Environmental Engineering Body of Knowledge Working Group. (2008). Environmental engineering body of knowledge: Summary report. Environmental Engineer: Applied Research and Practice, 6(Summer 2008), 21–33.
Abu-Eisheh, S. A. (2004). Assessment of the output of local engineering education programs
in meeting the needs of the private sector for economies in transition: The Palestinian terri-
tories case. International Journal of Engineering Education, 20(6), 1042–1054.
Alkhairy, A., Blank, L., Boning, D., Cardwell, D., Carter, W. C., Collings, N., . . . Stronge, B.
(2009, June). Comparison of international learning outcomes and development of engineering cur-
ricula. Paper presented at the ASEE Annual Conference and Exposition, Austin, TX.
American Society for Engineering Education. (1994). Engineering education for a changing world:
A joint project by the engineering deans council and corporate roundtable of the American Society for
Engineering Education. Washington, DC: American Society for Engineering Education.
American Society of Civil Engineers. (2007). Civil engineering body of knowledge for the 21st cen-
tury: Preparing the civil engineer for the future (Second Edition, Draft 10): American Society of
Civil Engineers.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E.,
Pintrich, P. R., . . . Wittrock, M. (2001). A taxonomy for learning, teaching, and assessing: A re-
vision of Bloom’s Taxonomy of Educational Objectives. New York, NY: Longman.
ASME. (1995). Mechanical engineering curriculum development initiative: Integrating the product
realization process (PRP) into the undergraduate curriculum. New York, NY: American Society
of Mechanical Engineers.
Assouline, M., & Meir, E. I. (1987). Meta-analysis of the relationship between congruence and
well-being measures. Journal of Vocational Behavior, 31(3), 319–332.
Bankel, J., Berggren, K. F., Blom, K., Crawley, E. F., Wiklund, I., & Ostlund, S. (2003). The
CDIO syllabus: A comparative study of expected student proficiency. European Journal of
Engineering Education, 28(3), 297–317.
Bemis, S. E., Belenky, A. H., & Soder, D. A. (1983). Job analysis. Washington, DC: Bureau of
National Affairs.
Benefield, L. D., Trentham, L. L., Khodadadi, K., & Walker, W. F. (1997). Quality improve-
ment in a college of engineering instructional program. Journal of Engineering Education,
86(1), 57–64.
Brumm, T. J., Hanneman, L. F., & Mickelson, S. K. (2005). The data are in: Student workplace
competencies in the experiential workplace. Paper presented at the 2005 American Society for
Engineering Education Annual Conference and Exposition, Portland, OR.
Burtner, J., & Barnett, S. (2003). A preliminary analysis of employer ratings of engineering co-op
students. Paper presented at the American Society for Engineering Education Southeastern
Regional Conference, Industrial Division.
Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting par-
adigms: From Flexner to competencies. Academic Medicine, 77(5), 361–367.
Continuing Professional Education Development Project (University of Pennsylvania). (1981).
Continuing Professional Education Development Project News (April 1981).
Crawley, E. F., Malmqvist, J., Ostlund, S., & Brodeur, D. R. (2007). Rethinking engineering ed-
ucation: The CDIO approach. New York, NY: Springer.
Cupp, S., Moore, P. D., & Fortenberry, N. L. (2004). Linking student learning outcomes to in-
structional practices-Phase I. Paper presented at the American Society for Engineering Edu-
cation Annual Conference and Exposition, Salt Lake City, UT.
Curry, L., & Wergin, J. F. (1997). Professional education. In J. G. Gaff & J. L. Ratcliff (Eds.),
Handbook of the undergraduate curriculum: A comprehensive guide to purposes, structures, prac-
tices, and change. San Francisco, CA: Jossey-Bass.
Daniel, W. W. (1990). Applied nonparametric statistics. Boston, MA: PWS-Kent Publishing
Co.
Davis, D. C., Beyerlein, S. W., & Davis, I. T. (2006). Deriving design course learning out-
comes from a professional profile. International Journal of Engineering Education, 22(3),
439–446.
De Graaff, E., & Ravesteijn, W. (2001). Training complete engineers: Global enterprise and
engineering education. European Journal of Engineering Education, 26(4), 419–427.
de Jager, H. G., & Nieuwenhuis, F. J. (2002). The relation between the critical cross-field outcomes
and the professional skills required by South African Technikon engineering graduates. Paper pre-
sented at the The 3rd South African Conference on Engineering Education, Durban,
South Africa.
Devore, J. L. (1995). Probability and statistics for engineering and the sciences. Pacific Grove, CA:
Duxbury Press.
Dey, E. L. (1997). Working with low survey response rates: The efficacy of weighting adjust-
ments. Research in Higher Education, 38(2), 215–226.
Doherty, A., Chenevert, J., Miller, R. R., Roth, J. L., & Truchan, L. C. (1997). Developing in-
tellectual skills. In J. G. Gaff & J. L. Ratcliff (Eds.), Handbook of the undergraduate curricu-
lum: A comprehensive guide to purposes, structures, practices, and change (pp. 170–189). San
Francisco, CA: Jossey-Bass.
Donahue, W. E. (1997). A descriptive analysis of the perceived importance of leadership com-
petencies to practicing engineers in central Pennsylvania (Doctoral dissertation, The Penn-
sylvania State University, 1996). Dissertation Abstracts International, A57(08), 3358.
Evans, D. L., Beakley, G. C., Crouch, P. E., & Yamaguchi, G. T. (1993). Attributes of engi-
neering graduates and their impact on curriculum design. Journal of Engineering Education,
82(4), 203–211.
Friedman, M. (1937). The use of ranks to avoid the assumption of normality implicit in the
analysis of variance. Journal of the American Statistical Association, 32(200), 675–701.
Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m
rankings. The Annals of Mathematical Statistics, 11(1), 86–92.
Ghorpade, J. (1988). Job analysis: A handbook for the human resource director. Englewood Cliffs,
NJ: Prentice Hall.
Gibbons, M. T. (2009). Engineering by the numbers. In Engineering college profiles and statistics
book. Washington, DC: American Society for Engineering Education. Retrieved from
http://www.asee.org/papers-and-publications/publications/college-profiles/2009-profile-
engineering-statistics.pdf
Heywood, J. (2005). Engineering education: Research and development in curriculum and instruc-
tion. Hoboken, NJ: Wiley-IEEE.
Hogg, R. V., & Ledolter, J. (1987). Engineering statistics. New York, NY: Macmillan.
Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work envi-
ronments. 3rd edition. Lutz, FL: Psychological Assessment Resources.
Hollander, M., & Wolfe, D. A. (1973). Nonparametric statistical methods. Hoboken, NJ: Wiley-
Interscience.
Houle, C. O. (1980). Continuing learning in the professions. San Francisco, CA: Jossey-Bass.
Hsu, J. C. (1996). Multiple comparisons: Theory and methods. Boca Raton, FL: Chapman &
Hall/CRC Press.
Hutcheson, P. A. (1997). Structures and practices. In J. G. Gaff & J. L. Ratcliff (Eds.), Hand-
book of the undergraduate curriculum: A comprehensive guide to purposes, structures, practices, and
change (pp. 100–117). San Francisco, CA: Jossey-Bass.
Jones, E. A. (2002). Curriculum reform in the professions: Preparing students for a changing
world (ED470541). ERIC Digest, ERIC Clearinghouse on Higher Education, EDO-HE-
2002–08.
Kannankutty, N., & Wilkinson, K. (1999). SESTAT: A tool for studying scientists and engineers in
the United States (NSF 99–337). Arlington, VA: National Science Foundation, Division of
Science Resources Studies.
Katz, S. M. (1993). The entry-level engineer: Problems in transition from student to profes-
sional. Journal of Engineering Education, 82(3), 171–174.
Kemp, N. D. (1999). The identification of the most important non-technical skills required by
entry-level engineering students when they assume employment. South African Journal of
Higher Education/SATHO, 13(1), 178–186.
Koen, P. A., & Kohli, P. (1998). ABET 2000: What are the most important criteria to the su-
pervisors of new engineering undergraduates? Proceedings of the 1998 ASEE Annual Confer-
ence and Exposition, Seattle, WA.
Lang, J. D., Cruse, S., McVey, F. D., & McMasters, J. H. (1999). Industry expectations of new
engineers: A survey to assist curriculum designers. Journal of Engineering Education, 88(1),
43–51.
Larson, C. W. (1983). Trends in the regulation of the professions. In K. E. Young, C. M.
Chambers, & H. R. Kells (Eds.), Understanding accreditation: Contemporary perspectives on issues
and practices in evaluating educational quality (pp. 317–341). San Francisco, CA: Jossey-Bass.
Lattuca, L. R., Terenzini, P. T., Harper, B. J., & Yin, A. C. (2010). Academic environments in
detail: Holland’s theory at the subdiscipline level. Research in Higher Education, 51(1), 21–39.
Lattuca, L. R., Terenzini, P. T., & Volkwein, J. F. (2006). Engineering change: A study of the im-
pact of EC2000, Final Report. Baltimore, MD: ABET.
Lord, M. (2010). Not what students need. Prism, January 2010, 44–46.
Lucena, J. C. (2006). Globalization and organizational change: Engineers’ experiences and their
implications for engineering education. European Journal of Engineering Education, 31(3), 321–338.
Martin, R., Maytham, B., Case, J., & Fraser, D. (2005). Engineering graduates’ perceptions of
how well they were prepared for work in industry. European Journal of Engineering Educa-
tion, 30(2), 167–180.
McMasters, J. H., & Komerath, N. (2005). Boeing-university relations - A review and
prospects for the future. Proceedings of the 2005 American Society for Engineering Education
Annual Conference and Exposition, Portland, OR.
Meier, R. L., Williams, M. R., & Humphreys, M. A. (2000). Refocusing our efforts: assessing
non-technical competency gaps. Journal of Engineering Education, 89(3), 377–394.
Mentkowski, M., & Associates. (2000). Learning that lasts: Integrating learning, development,
and performance in college and beyond. San Francisco, CA: Jossey-Bass.
Merino, D. (2006, June). A proposed engineering management body of knowledge (EMBOK).
Paper presented at the 2006 ASEE Annual Conference and Exposition, Chicago, IL.
Mickelson, S. K. (2001, June). Development of workplace competencies sufficient to measure ABET
outcomes. Paper presented at the 2001 ASEE Annual Conference and Exposition, Albu-
querque, NM.
Mickelson, S. K. (2002, June). Validation of workplace competencies sufficient to measure
ABET outcomes. Paper presented at the 2002 ASEE Annual Conference and Exposition,
Montreal, Quebec, Canada.
Miller, R. G. (1981). Simultaneous statistical inference. 2nd edition. New York, NY: Springer-
Verlag.
Miller, R. G. (1986). Beyond ANOVA, basics of applied statistics. New York, NY: John Wiley &
Sons.
National Academy of Engineering. (2004). The engineer of 2020: Visions of engineering in the new
century. Washington, DC: The National Academies Press.
National Research Council. (1995). Engineering education: Designing an adaptive system.
Washington, DC: National Academies Press.
National Society of Professional Engineers (NSPE). (1992). Engineering education issues: Report
on surveys of opinions by engineering deans and employers of engineering graduates on the first pro-
fessional degree. Alexandria, VA: NSPE Publication No. 3059.
Nemenyi, P. B. (1963). Distribution-free multiple comparisons (Doctoral dissertation, Prince-
ton University, 1963). Dissertation Abstracts International, 25(2), 1233.
The Economist. (2007, May 10). New graduation skills. Retrieved from http://www.
economist.com/node/9149115
Nguyen, D. Q. (1998). The essential skills and attributes of an engineer: A comparative study of
academics, industry personnel and engineering students. Global Journal of Engineering Edu-
cation, 2(1), 65–75.
Office of the Professions. (2000). Report on continuing competence to the Board of Regents of the
State of New York. Retrieved from http://www.op.nysed.gov/reports/continuing_competence.pdf
Oude Voshaar, J. H. (1980). (k-1)-mean significance levels of nonparametric multiple com-
parison procedures. The Annals of Statistics, 8(1), 75–86.
Passow, H. J. (2008). What competencies should undergraduate engineering programs emphasize? A
dilemma of curricular design that practitioners’ opinions can inform (Doctoral dissertation). Uni-
versity of Michigan, Ann Arbor, MI.
Pottinger, P. S., & Goldsmith, J. E. (1979). New directions for experiential learning: Defining and
measuring competence. San Francisco, CA: Jossey-Bass.
Prados, J. W., Peterson, G. D., & Lattuca, L. R. (2005). Quality assurance of engineering edu-
cation through accreditation: The impact of Engineering Criteria 2000 and its global influ-
ence. Journal of Engineering Education, 94(1), 165–184.
Sageev, P., & Romanowski, C. J. (2001). A message from recent engineering graduates in the
workplace: Results of a survey on technical communication skills. Journal of Engineering Ed-
ucation, 90(4), 685–693.
Sardana, H. K., & Arya, P. P. (2003). Training evaluation of engineering students: A case
study. International Journal of Engineering Education, 19(4), 639–645.
Saunders-Smits, G. N. (2008). Study of Delft aerospace alumni (Doctoral dissertation). Technische
Universiteit Delft, Delft, Netherlands.
Saunders-Smits, G. N. (2005). The secret of their success: What factors determine the career success of
an aerospace engineer trained in the Netherlands? Paper presented at the 2005 ASEE Annual
Conference and Exposition, Portland, OR.
Saunders-Smits, G. N. (2007, July). Looking back: The needs of an aerospace engineer in their
degree course with the benefit of hindsight. Proceedings of the SEFI and IGIP Joint Annual
Conference, Miskolc, Hungary.
Scott, G., & Yates, K. W. (2002). Using successful graduates to improve the quality of undergrad-
uate engineering programmes. European Journal of Engineering Education, 27(4), 363–378.
Shea, J. E. (1997). An integrated approach to engineering curricula improvement with multi-
objective decision modelling and linear programming (Doctoral dissertation, Oregon State
University). Dissertation Abstracts International, A58(5), 1649.
Sheppard, S. D., Macatangay, K., Colby, A., & Sullivan, W. M. (2008). Educating engi-
neers: Designing for the future of the field—book highlights. Retrieved from http://www.
carnegiefoundation.org/sites/default/files/publications/elibrary_pdf_769.pdf
Smart, J. C., Feldman, K. A., & Ethington, C. A. (2000). Academic disciplines: Holland’s theory
and the study of college students and faculty. Nashville, TN: Vanderbilt University Press.
Spencer, L. M., McClelland, D. C., & Spencer, S. M. (1994). Competency assessment methods:
History and state of the art. Boston, MA: Hay McBer Research Press.
Spencer, L. M., & Spencer, S. M. (1993). Competence at work: Models for superior performance.
New York, NY: Wiley.
Spiegel, M. R. (1990). Schaum’s outline of theory and problems of statistics. 2nd edition. New
York, NY: McGraw-Hill.
Stark, J. S., Lowther, M. A., & Hagerty, B. M. K. (1986). Responsive professional education: Bal-
ancing outcomes and opportunities (ASHE-ERIC Higher Education Report No. 3). Washing-
ton, DC: Association for the Study of Higher Education (ED 273 229).
Stark, J. S., Lowther, M. A., & Hagerty, B. M. K. (1987). Faculty perceptions of professional
preparation environments: Testing a conceptual framework. The Journal of Higher Educa-
tion, 58(5), 530–561.
The Steering Committee of the National Engineering Education Research Colloquies. (2006).
The research agenda for the new discipline of engineering education. Journal of Engineering
Education, 95(4), 259–261.
Trumbo, B. E. (2002). Learning statistics with real data. Pacific Grove, CA: Duxbury/Thomson
Learning.
Turley, R. T. (1992). Essential competencies of exceptional professional software engineers
(Doctoral dissertation, Colorado State University, 1991). Dissertation Abstracts International,
B53(01), 400.
Ulrich, K. T., & Eppinger, S. D. (1995). Product design and development. New York, NY:
McGraw-Hill.
Wagner, S. F. (1992). HarperCollins college outline: Introduction to statistics. New York, NY:
HarperPerennial.
Whetzel, D. L., Steighner, L. A., & Patsfall, M. R. (2000). Modeling leadership competencies
at the U.S. Postal Service. Management Development Forum, 1(2).
Woollacott, L. (2009). Taxonomies of engineering competencies and quality assurance in engi-
neering education. In A. S. Patil & P. J. Gray (Eds.), Engineering education quality assurance:
A global perspective (pp. 257–295). London, UK: Springer.
World Chemical Engineering Council. (2004). How does chemical engineering education meet the
requirements of employment? World Chemical Council Secretariat. Retrieved from
http://www.dechema.de/chemengworld_media/Downloads/short_report.pdf
Zar, J. H. (1999). Biostatistical analysis. 4th edition. Upper Saddle River, NJ: Prentice Hall.

AUTHOR

Honor J. Passow, Ph.D., P.E., is an instructor and curriculum designer at The Dart-
mouth Institute for Health Policy and Clinical Practice, Dartmouth College, 35 Centerra
Parkway, Lebanon, New Hampshire, 03766; honor.passow@dartmouth.edu.