BACKGROUND
ABET-accredited engineering programs must help students develop specific outcomes (i.e., competencies). Faculty must determine the relative emphasis among the competencies. Yet, information is
sparse about the relative importance of each competency for professional practice.
PURPOSE (HYPOTHESIS)
This study synthesizes opinions of engineering graduates about which competencies are important for professional practice.
DESIGN/METHOD
A survey asked undergraduate alumni of a large public university in the Midwest to rate the importance of
the ABET competencies in their professional experience. Responses included descriptions of education,
post-graduate work environment, and demographics. Protected, post-hoc, all-pairwise multiple comparisons determined patterns in the importance ratings for the aggregate and for descriptive subgroups.
RESULTS
The lowest-rated competency’s mean rating was 3.3 out of 5. Graduates of 11 engineering majors rated a
top cluster of competencies (teamwork, communication, data analysis, and problem solving) significantly
higher than a bottom cluster (contemporary issues, design of experiments, and understanding the impact
of one’s work). Importance ratings of five other competencies fell in an intermediate cluster in which
importance was statistically tied to either the top or bottom cluster, depending on work environment or
academic discipline, not demographics. The clusters were stable over time, that is, over seven survey
administrations (1999–2005), years since graduation (0, 2, 6 & 10), and graduation year (1989–2003).
CONCLUSIONS
Graduates across engineering disciplines share a pattern of importance for professional practice among the
ABET competencies that is statistically significant, consistent across demographic variables, and stable
over time. This pattern can inform faculty decisions about curriculum emphasis within and across engineering disciplines.
KEYWORDS
competencies, engineering, professional practice
INTRODUCTION
Journal of Engineering Education 101 (January 2012) 1
for what knowledge and skills faculty need” (Doherty, Chenevert, Miller, Roth, &
Truchan, 1997, p. 182). Under the EC2000 paradigm, faculty need the ability to envision,
collectively articulate, and prioritize the competencies that students should gain from the
educational program before they graduate in order to prepare for a myriad of career paths
(Passow, 2008). Despite the need, information to support faculty decisions about relative
emphasis among competencies is sparse.
The relative emphasis in the curriculum should be informed by the relative importance of the competencies required for professional practice. According to the National Engineering Education Research Colloquies: "Students and employers alike expect a high degree of synergy between what is learned in [the] classroom and what is needed in the field for successful practice" (The Steering Committee of the National Engineering Education Research Colloquies, 2006). With the worldwide move toward outcomes-based quality assurance in engineering education, a body of literature has established lists of competencies that are important for engineering practice (e.g., American Society for Engineering Education, 1994; Crawley, Malmqvist, Ostlund, & Brodeur, 2007; Cupp, Moore, & Fortenberry, 2004; De Graaff & Ravesteijn, 2001; McMasters & Komerath, 2005; Mickelson, 2001, 2002; National Research Council, 1995), including two reviews of published lists (Alkhairy et al., 2009; Woollacott, 2009) and the Engineer 2020 Report (National Academy of Engineering, 2004). A subset of these competencies has been adopted by accreditation agencies, such as ABET (Prados, Peterson, & Lattuca, 2005). Other lists of competencies have been developed for specific engineering fields, such as civil engineering (American Society of Civil Engineers, 2007), environmental engineering (AAEE Environmental Engineering Body of Knowledge Working Group, 2008), design engineering (Davis, Beyerlein, & Davis, 2006), and engineering management (Merino, 2006). However, for faculty in a program to design a curriculum that "support[s] the integration of knowledge, skills…, and …values necessary for today's professional practice" (Sheppard, Macatangay, Colby, & Sullivan, 2008, p. 4), they need to know the relative importance among these competencies for engineering practice. Simple lists of the competencies that practicing engineers believe are "required to satisfy the needs of the profession" (Prados et al., 2005, pp. 177–178) do not include relative importance in practice.
Naturally, outcomes-based quality assurance has prompted many educational programs to survey employers about gaps in their curriculum by asking: "Rate the adequacy of our graduates' performance on this competency" for each competency in a list. Many of these results have been published (e.g., Abu-Eisheh, 2004; Brumm, Hanneman, & Mickelson, 2005; Burtner & Barnett, 2003; Katz, 1993; Lucena, 2006; Martin, Maytham, Case, & Fraser, 2005; Sageev & Romanowski, 2001). A gap is, by definition, the relative difference between a target for competent performance and an actual performance. A list of competency gaps, ordered from the largest gap to the smallest, can prioritize areas for improvement for an existing program but cannot generalize to other programs. It is the relative importance of the target competencies themselves, not the gaps, that is needed for designing programs. According to product design faculty Ulrich and Eppinger (1995), a list of "relative importance of the needs" (p. 48) is an essential precursor to establishing specifications for a design. They assert that needs should be expressed in terms of "what the product has to do" and should "use positive, not negative, phrasing." A list of competency gaps does not meet these requirements, while a list of relative importance of the target competencies does.
A body of literature has focused directly on the relative importance of the competencies required for successful engineering practice. These studies have used surveys that asked respondents to rate (or rank) the importance of each competency in a list. Of the 19 "importance" studies published since 1990, 13 included a balance of technical and professional competencies (ASME, 1995; Bankel et al., 2003; Benefield, Trentham, Khodadadi, & Walker, 1997; Evans, Beakley, Crouch, & Yamaguchi, 1993; Koen & Kohli, 1998; Lang, Cruse, McVey, & McMasters, 1999; Lattuca, Terenzini, & Volkwein, 2006; National Society of Professional Engineers (NSPE), 1992; Nguyen, 1998; Saunders-Smits, 2008; Shea, 1997; Turley, 1992; World Chemical Engineering Council, 2004), while six included only professional competencies (de Jager & Nieuwenhuis, 2002; Donahue, 1997; Kemp, 1999; Meier, Williams, & Humphreys, 2000; Sardana & Arya, 2003; Scott & Yates, 2002). A balanced inclusion of both technical and professional competencies is essential to inform the design of entire curricula. Each of the 13 studies with balanced inclusion of competencies reported their results as an ordered list of competencies from the highest average importance to the lowest. Each study displayed and discussed the differences in average importance among the competencies; however, none of them tested the differences among ratings of the various competencies for statistical significance. Of the 13 studies, 11 examined the difference in importance rating for various groups of respondents for each competency, but only three of these studies tested group-to-group differences statistically. None of the identified studies addressed trends in importance ratings over time. For faculty who are making decisions about curricular design, it is essential to have confidence in the relative levels of importance of the various competencies. This study addresses these limitations in prior work by addressing the following questions: Which ABET competencies do engineering graduates rate as most and least important for professional practice? Are differences in importance ratings (1) statistically significant, (2) consistent across groups of respondents, and (3) stable over time?
competencies and attitudes have been to you in your professional experience." See Table 1 for a list of the 12 ABET competencies. The principal aim of the analysis was to determine the overall pattern of differences among the importance ratings of competencies for the sample as a whole. The secondary aim of the analysis was to compare patterns of importance ratings for each subgroup to the pattern for the sample as a whole. Subgroups were defined using 17 survey questions: educational experience (7 questions), work experience after graduation (8), gender (1), and race (1). The analysis statistically tested the null hypothesis that there are no differences in the median importance ratings for the various competencies.
TABLE 1
The Survey Question of Interest for the Study, Verbatim
TABLE 2
Sample Design and Sample Size for the CoE Alumni and Senior Populations
concern about coverage bias. Because theory and literature indicate that the overall pattern of importance in competencies depends on the practice setting, occupations in the sample were compared to occupations of engineering graduates in a nationally representative data set. A 1999 survey of 24,716 U.S. residents under age 75 holding an engineering degree (bachelor's or higher) asked the question: "If you are employed or self-employed, which category below BEST describes your job?" Respondents then identified their job code from a two-page list. The National Science Foundation (NSF) weighted the survey responses to represent the estimated population of 2.3 million engineering graduates employed in the U.S. in 1999 (Kannankutty & Wilkinson, 1999). After obtaining the data set from the NSF, we organized the occupations into categories (Figure 1) in order to build a survey question about occupations for Midwestern U's engineering alumni.
When our sample was collected, we compared the NSF national proportions of engineering graduates in various occupations (Figure 1) with the occupation data from our Midwestern U sample (Figure 2). The distributions are strikingly similar, except that the Midwestern U data have a noticeably smaller portion of non-engineering science and technology occupations. The close percentages for management are surprising because the Midwestern U data are limited to respondents 10 years after graduation while the NSF data include engineering graduates up to 75 years old. Taken altogether, it appears that the Midwestern U sample is fairly representative of the U.S. population of engineering graduates with respect to occupation, which reduces one aspect of coverage bias. Yet, coverage bias may still limit generalizability: the recent alumni in our sample may underestimate the importance of competencies that are essential later in a career, and the respondents' importance ratings may be biased by institution-specific values they adopted during their undergraduate years.
Data Analysis
The principal aim of the analysis was to determine the overall pattern of differences among the importance ratings of competencies for the sample as a whole. This is not a common analytic goal, so the methods require explanation. One analytic question for the main effect was: Taking every respondent to the revised wording of the survey (n = 2115) as a single group, do their importance ratings for the 12 competencies differ, and if so, what is the pattern of differences among the competency ratings? For this question, the competencies are analogous to 12 experimental treatments. Because each survey respondent rated all 12 competencies, the ratings are not independent for each competency. Thus, in statistical terminology pertaining to the design of experiments, each respondent is a block and each competency is a treatment, which makes this a two-way layout or two-way classification. The two-way layout helps control for variations among raters (harsher and more generous raters), and therefore substantially reduces the chance of failing to detect a difference when, in truth, a difference exists (Type II error) (e.g., Spiegel, 1990; Trumbo, 2002). Also, because the respondents chose their own ratings, the ratings are random variables for each of the 12 fixed competencies (e.g., Devore, 1995; Hogg & Ledolter, 1987). In addition, because there are 12 competencies (or treatments) to compare, a post-hoc multiple comparison procedure is recommended for minimizing false detections, in other words detecting a difference when, in truth, no difference exists (Type I error) (e.g., Hsu, 1996; Miller, 1981; Trumbo, 2002). Therefore, if the data were normally distributed, the two-step analytic approach would be to use analysis of variance (ANOVA) to find out if any differences exist among the ratings of the competencies, and if so, perform Tukey's multiple comparison test to determine the pattern of differences in the importance ratings, that is, which importance ratings differ with statistical significance.
However, histograms of the ratings for the various competencies show that the ratings are not normally distributed. The respondents predominantly used the upper end of the 5-point rating scale, so the ratings are highly skewed and show dramatic ceiling effects. A statistical consultant suggested a non-parametric approach (as opposed to data transformations) to accommodate the skew while maintaining the inherent meaning of the ratings. Consequently, because normal distributions cannot be assumed, only non-parametric statistics were used (e.g., Daniel, 1990; Trumbo, 2002). Thus, statistically speaking, the appropriate analysis is nonparametric, post-hoc multiple comparisons of location for a mixed-effects, complete block, two-way layout. The Friedman rank sums test is the "nonparametric analogue of the parametric two-way analysis of variance" (Daniel, 1990, p. 262) most commonly used in practice (e.g., Hsu, 1996; Miller, 1986; Zar, 1999). The multiple comparison test based on the Friedman rank sums proposed by Nemenyi (1963) is widely recommended for non-parametric multiple comparisons for complete blocks in a two-way layout (e.g., Daniel, 1990; Hollander & Wolfe, 1973; Miller, 1981, 1986; Oude Voshaar, 1980; Zar, 1999).
[Figures 1 and 2 (residue): occupation distributions by category (engineer, manager, marketing/sales, non-engineering science/technology, other non-engineering work), with labeled percentages including 59% and 55%.]
The distribution-free Friedman rank sum test was used to test the null hypothesis that the population distributions for the treatments are the same (Wagner, 1992), or more specifically that the medians of all the treatments are equal (Daniel, 1990; Trumbo, 2002). For each respondent, the importance ratings of the competencies were ranked from 1 to 12 (with rules for ties); these ranks are the data used in the test. The technique, proposed by Friedman (1937, 1940), assumes that (1) the blocks (respondents in this analysis) are mutually independent, (2) the variable of interest (importance rating in this analysis) is continuous, (3) there are no interactions between blocks and treatments (between respondents and competencies in this analysis), and (4) the observations for each block (or respondent) may be ranked in order of magnitude (Daniel, 1990). Assumptions 1, 3, and 4 are satisfied. Assumption 2 is not met because the 5-point rating scale is discrete, not continuous. However, there are no nonparametric alternatives for non-continuous data, so we used the Friedman rank sums test with the customary 5% chance of a Type I error (α = .05) despite the limitation introduced by violating Assumption 2.
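The Friedman procedure described above can be sketched in a few lines of Python. This is a minimal illustration on simulated data, not the study's data: the number of respondents, the rater-severity term, and the built-in top-to-bottom gradient across the 12 competencies are all assumptions made for the example.

```python
# Sketch of the Friedman rank sums test on simulated 5-point ratings.
# n respondents (blocks) each rate k = 12 competencies (treatments).
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
k, n = 12, 200                                   # competencies, respondents (illustrative n)

shift = np.linspace(0.5, -0.5, k)                # some competencies rated higher on average
rater = rng.normal(0.0, 0.3, size=(n, 1))        # harsher vs. more generous raters (the "blocks")
latent = 4.0 + shift + rater + rng.normal(0, 0.8, size=(n, k))
ratings = np.clip(np.rint(latent), 1, 5).astype(int)   # discrete, skewed, ceiling-heavy scale

# friedmanchisquare ranks each respondent's 12 ratings internally
# (average ranks for ties) and tests whether mean ranks differ.
stat, p = friedmanchisquare(*[ratings[:, j] for j in range(k)])
print(f"Friedman chi-square = {stat:.1f}, p = {p:.2g}")
```

Because each respondent supplies their own within-block ranking, the rater-severity term cancels out, which is exactly why the two-way (blocked) layout is preferred over treating all ratings as independent samples.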
Nemenyi's multiple comparison tests the following null hypothesis: the distributions are the same for specific pairs of treatments (competencies in this analysis). As with any multiple comparison procedure, the critical values are chosen to limit the Type I error rate for the entire analysis instead of for each individual comparison. In this analysis, the level of significance (studywise α = .05) is split among the 66 comparisons (comparing each competency with each of the other 11 competencies). For a group of respondents, the mean rank for each competency was used in the large sample formula for Nemenyi's test (Miller, 1981, equation 131, p. 174):
|R̄_i − R̄_j| ≥ q_(α; k, ∞) √( k(k + 1) / (12n) )

where R̄_i is the mean rank for competency i, k is the number of competencies in the analysis, n is the number of respondents in the analysis, and q_(α; k, ∞) is the percentage point of the Studentized range (Miller, 1981, Table B1, pp. 234–237). The results of this test are displayed graphically as "tie-lines" on a graph, which tie together competencies whose importance ratings do not differ with statistical significance. In summary, if any significant differences exist according to the Friedman rank sums test (α = .05), the Nemenyi multiple comparison test (studywise α = .05) reveals the overall pattern of differences in the importance ratings of the competencies (specifically, differences in individuals' ranks of the competencies averaged across all the individuals in the group).
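Nemenyi's large-sample rule can be sketched directly from that formula. Again this is an illustration on simulated ratings, not the study's data; the sample size and the hardcoded Studentized-range point q(.05; 12, ∞) ≈ 4.62 (from standard tables) are assumptions of the example.

```python
# Sketch of Nemenyi's large-sample multiple comparison on Friedman mean ranks:
# two competencies are "tied" if their mean ranks differ by less than the
# critical difference q * sqrt(k(k+1)/(12n)).
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n, k = 200, 12
latent = 4.0 + np.linspace(0.5, -0.5, k) + rng.normal(0, 0.8, size=(n, k))
ratings = np.clip(np.rint(latent), 1, 5)         # simulated 5-point ratings

ranks = rankdata(ratings, axis=1)                # rank each respondent's 12 ratings (ties averaged)
mean_ranks = ranks.mean(axis=0)                  # mean rank per competency

q_05_12_inf = 4.62                               # approx. Studentized-range point q(.05; 12, inf)
cd = q_05_12_inf * np.sqrt(k * (k + 1) / (12 * n))

# "Tie-lines": pairs of competencies whose mean ranks are NOT significantly different.
tied = [(i, j) for i in range(k) for j in range(i + 1, k)
        if abs(mean_ranks[i] - mean_ranks[j]) < cd]
print(f"critical difference = {cd:.2f}; {len(tied)} of 66 pairs tied")
```

Note how the critical difference shrinks with √n: with roughly 2,000 respondents the tie-lines become much shorter than with a subgroup of 50, which is why small subgroups show more "ties" between clusters.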
The secondary aim of the analysis was to compare patterns of importance ratings for 133 subgroups to the pattern for the sample as a whole: the aggregate pattern. The 133 subgroups were gender (2 groups; male and female), race (6), data collection year (7), graduation year (15), alumni year (i.e., years since graduation) (3), mode of entry into engineering (e.g., freshman declaration, transfer student…) (4), single-major or double-major (2), undergraduate major (11), grade point average (4), satisfaction with undergraduate experience (5), satisfaction with career services (5), number of additional degrees (3), additional degrees (8), number of employers since graduation (7), employment status (4), employer's business (18), type of job (6), type of engineering job (13), professional registration status (2), and income (8).
For each of the 133 groups, the Friedman rank sum test and the Nemenyi multiple comparison test were conducted to determine the pattern of differences in the importance ratings for that group. The additional step in the subgroup analysis was to identify subgroups whose pattern of ratings differed from the aggregate pattern with statistical significance. Graphing by subgroups showed differences in rating patterns among subgroups. On first exploration, the subgroup data appeared to be a dizzying jumble of deviations from the overall pattern of differences among ratings of competencies for the entire data set. However, a strong underlying pattern was evident when the competencies were grouped into three clusters. A top cluster (teamwork, communication, data analysis, and problem solving) had top ratings in all but two of the 133 groups. A bottom cluster (contemporary issues, experiments, and impact) had bottom ratings in all but a few groups. An intermediate cluster ("math, science, and engineering knowledge", ethics, lifelong learning, design, and engineering tools) had ratings that ranged widely, depending on the subgroup. From these observations, we developed a criterion for a difference from the aggregate pattern, based on the "tie" lines for the subgroup: if any competencies from the top and bottom clusters were "tied" according to the Nemenyi multiple comparison test, that group's pattern was said to differ from the aggregate pattern with statistical significance.
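The cluster-independence criterion above amounts to a simple check over a subgroup's tie-lines. A minimal sketch follows; the cluster memberships come from the text, but the short competency names and the example tied pairs are hypothetical.

```python
# Cluster-independence criterion: a subgroup's pattern differs from the
# aggregate if any top-cluster competency is statistically "tied" with any
# bottom-cluster competency under the Nemenyi test for that subgroup.
TOP = {"teams", "communication", "data analysis", "problem solving"}
BOTTOM = {"contemporary issues", "experiments", "impact"}

def differs_from_aggregate(tied_pairs):
    """tied_pairs: pairs of competency names whose mean ranks were not
    significantly different for this subgroup (its "tie-lines")."""
    return any((a in TOP and b in BOTTOM) or (a in BOTTOM and b in TOP)
               for a, b in tied_pairs)

# Hypothetical subgroup where "experiments" rises to the top cluster:
print(differs_from_aggregate([("experiments", "teams")]))   # True
# Ties confined to the intermediate cluster do not trigger the criterion:
print(differs_from_aggregate([("ethics", "design")]))       # False
```

Ties within a cluster, or between the intermediate cluster and either extreme, are expected and do not count as a deviation; only a top-to-bottom tie does.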
This study pursued the question: Which ABET competencies do engineering graduates rate as most and least important for professional practice? Data to answer this question came from a survey of recent alumni of a large Midwestern University's College of Engineering. Respondents rated the importance of the ABET competencies in their professional experience.
[Figure 3 (residue): overall mean importance ratings (n = 2115 recent engineering alumni, revised wording), on a scale of 5 = "extremely important," 4 = "quite important," 3 = "somewhat important," 2 = "slightly important," 1 = "not at all important." Each respondent's ratings were ranked from 1 (highest rating) to 12, and differences were tested between the mean rank for each competency over all respondents.]
FIGURE 3. Importance ratings for the ABET competencies, revised wording. The survey
question was: “Please rate how important the following competencies and attitudes have
been to you in your professional experience.” Verbatim competencies are in Table 1.
FIGURE 4. Importance ratings for the ABET competencies by undergraduate major, revised wording (survey years 2002–2005). Graduates of 11 engineering majors valued a top cluster of competencies consistently higher than a bottom cluster.
A summary of the clusters and statistically significant exceptions appears in Table 3. The cluster independence criterion held for all time-related variables (survey year, graduation year, and years since graduation) and all demographic variables (gender and race). The criterion also held for all levels of many other variables: all methods of entry into an engineering major (as a freshman or transfer student), all undergraduate grade-point-average levels, all numbers of majors (single-majors and double-majors), all levels of satisfaction with career services, all numbers of graduate degrees (from 0 to 5), all numbers of employers since graduation, all job classifications (engineer, science/technology work, marketing, or manager), all income levels, and both types of professional engineering status (registered and unregistered). The cluster independence criterion held for most undergraduate majors (9 out of 11), most levels of satisfaction with the undergraduate experience (4 out of 5), most additional degrees (5 out of 8), most employment status categories (3 out of 4), most employers' business categories (13 out of 18), most engineering job types (10 out of 13), and most levels of satisfaction with career (4 out of 5).
Only 16 out of 133 subgroups (12%) had a pattern of importance ratings that differed significantly from the aggregate with respect to the cluster independence criterion (studywise α = .05). All 16 of these differing groups can be categorized by practice setting or undergraduate major, as was predicted by theory (Assouline & Meir, 1987; Holland, 1997; Smart et al., 2000; Spencer et al., 1994; Stark et al., 1986) and prior empirical work (ASME, 1995; Bankel et al., 2003; Evans et al., 1993; Saunders-Smits, 2005, 2007; Shea, 1997). Ten of the sixteen atypical subgroups rated "experiments" equal with the top cluster. Competence in "designing and conducting experiments" is prized at top-cluster importance among materials engineering majors, researchers in any field (engineering faculty, engineering researchers, those holding a doctorate in engineering, those who work at universities), and those involved with products that require extensive testing (test engineers and those working in computer hardware,
medical devices, or communications). The other group that highly prized competence with experiments was composed of those who were neither employed nor students, which is actually the null option for work environment.

TABLE 3
Engineering Graduates' Ratings of Importance for ABET Competencies, Overall Pattern, and Exceptions
“contemporary issues” at a top-cluster importance level. Two medically-related groups,
those holding an M.D. degree and those working in the healthcare industry, rated
problem solving below the top cluster and contemporary issues and impact at a top-cluster level. Those holding a doctorate in a field outside of engineering rated "contemporary issues" and impact at a top-cluster importance level.
In summary, when there were differences from the aggregate pattern of importance ratings (Table 3), they were associated exclusively with undergraduate major or work environment and not with time-related or demographic variables. This was predicted by Holland's (1997) theory of vocational behavior, the theory of "models for superior performance" (Spencer et al., 1994), and Stark et al.'s (1986) framework of importance of competencies. However, the differences from the aggregate pattern should not distract from the strong, dominating aggregate pattern itself: the top-cluster competencies (teamwork, communication, data analysis, and problem solving) are universally deemed most important by engineering graduates of all 11 engineering majors and 120 of the 122 other subgroups investigated in this study. The bottom-cluster competencies (contemporary issues, experiments, and impact) were rated as less important to professional practice than the top-cluster competencies by 117 out of 133 subgroups. The pattern of importance ratings among the competencies is distinctive among the engineering disciplines primarily in the intermediate cluster ("math, science, and engineering knowledge", ethics, lifelong learning, design, and engineering tools), which may be statistically tied with top-cluster or bottom-cluster competencies, depending on the subgroup. Essentially, it is the intermediate cluster of competencies which "capture some of the subtleties of…variations in values" (Lattuca, Terenzini, Harper, & Yin, 2010) among engineering disciplines and other subgroups, but graduates of all engineering disciplines share a common value on the top-cluster competencies.
[Figure 5 (residue): mean importance ratings by survey year, 1999–2005 (n = 857, 700, 553, 484, 682, 530, and 419, respectively).]
Note. † The response options in the initial wording were 5 = "always necessary," 4 = "often useful," 3 = "useful," 2 = "rarely useful," 1 = "never used, needed." †† The response options in the revised wording were 5 = "extremely important," 4 = "quite important," 3 = "somewhat important," 2 = "slightly important," 1 = "not at all important."
FIGURE 5. Importance ratings for the ABET competencies by survey year. The survey question was: "Please rate how important the following competencies and attitudes have been to you in your professional experience." Competency descriptions are in Table 1. In the legend, the competencies are listed in descending order of aggregated importance ratings for the initial and revised wordings combined. Note the separation between the top cluster competencies (teamwork, communication, data analysis, and problem solving) and the bottom cluster competencies (contemporary issues, experiments, and impact).
options worded in terms of "importance." Thus, some wording changes led to importance ratings that changed relative importance level with statistical significance (studywise α = .05), while others did not. This is consistent with findings in other studies. Shea's (1997) results show that relative importance ratings depend strongly on wording, while Bankel et al. (2003) reported minimal effects. Despite the wording changes, clusters of importance ratings did not change.
CONCLUSIONS
Under EC2000, accredited engineering programs must help students develop specific program outcomes (i.e., competencies), but the relative emphasis among these competencies has been left for each program to determine. Leading engineering educators assert that the relative emphasis in the curriculum should, among other considerations, be informed by the relative importance of the competencies required for professional practice (e.g., Prados et al., 2005; The Steering Committee of the National Engineering Education Research Colloquies, 2006). Yet, information about relative importance among competencies is sparse. This study surveyed 4,225 recent engineering graduates of a single institution, asking them to rate the importance of the ABET competencies in their professional
experience. Their responses were used to answer the question: Which ABET competencies do engineering graduates rate as most and least important for professional practice?
Consistent with the findings in previous studies, mean ratings for all the ABET competencies were above the middle of a five-point scale. The importance ratings for a top cluster of competencies were statistically distinct from the ratings for a bottom cluster of competencies with only a few exceptions (Table 3). With few exceptions, engineering graduates value a top cluster of competencies (teamwork, communication, data analysis, and problem-solving) significantly higher than a bottom cluster (contemporary issues, experiments, and understanding the impact of one's work). Competencies in the intermediate cluster ("math, science, and engineering skills," ethics, life-long learning, design, and engineering tools) may be statistically tied to competencies in either the top or bottom cluster, depending on work environment or academic discipline. The clusters were stable over time, that is, over seven survey administrations, eighteen graduation years, and over the period of zero to ten years after graduation. The descending sequence of importance ratings changed significantly with wording changes on the survey, which is consistent with findings in other studies. However, the clusters remained consistent across wording changes. In short, the clusters capture the underlying pattern.
Naturally, there were variations from the aggregate pattern for each of the 133 subgroups; however, the only groups that differ from the aggregate pattern with statistical significance are based solely on work environment and academic major, not on variables that are demographic or time-related. This finding was consistent with theory (Holland, 1997; Spencer & Spencer, 1993; Stark, Lowther, & Hagerty, 1987) and empirical work among engineers (ASME, 1995; Bankel et al., 2003; Evans et al., 1993; Saunders-Smits, 2005, 2007; Shea, 1997). The engineering graduates in this study all rated the top cluster competencies of teamwork, communication, data analysis, and problem solving as most important. Only physicians and those working in the healthcare industry rated problem solving as slightly lower than the top cluster. This finding has implications for curriculum design: graduates of all engineering disciplines share the view that these top cluster competencies are highly important in their professional experience, regardless of their work environment. Similarly, the bottom cluster competencies of contemporary issues, experiments, and impact were rated with lowest importance across all engineering majors except for materials engineering, where experiments was rated at top-cluster importance. Researchers in any field and those whose work involves product testing also valued experiments highly. "Contemporary issues" was rated at top-cluster importance by computer science majors. In short, the importance-level clusters are an excellent first approximation of the importance of the ABET competencies for professional work for engineering graduates of any major.
IMPLICATIONS
The major limitation of this study is potential coverage bias due to a sample from a single institution. A second limitation is sampling only recent graduates (those within 10 years of graduation). A sample including graduates of all ages would, in essence, bring employers’ perspectives into the sample. A third limitation is using only one wording for nine of the competencies and only two wordings for three of the competencies. Future research should sample multiple institutions, include engineering graduates of all ages, and use multiple wordings of the competencies. This would address the limitations of this study and help answer the question, “Can the results of this study be generalized to other
Journal of Engineering Education 101 (January 2012) 1
analogy. The successful designs don’t simply cobble together separate functions, such as adding a fax machine and a stand-alone scanner to a computer system. The successful curricula start from scratch and combine the competencies in a more efficient package, as a combination printer-fax-scanner does. Across the professions, curricula that develop and integrate the technical and professional competencies are built on a central design principle: embed the technical content in the context of professional practice. This simple idea is transformational. It leads naturally to learning the engineering competencies in the context of applications, where decision-making, ethics, contemporary issues, and communications abound. In summary, creating curricula that help students develop and integrate the technical and professional competencies will require that we embed the content in the context of professional practice. Accomplishing this will require design, a competency that many engineering faculty have in abundance.
REFERENCES
Benefield, L. D., Trentham, L. L., Khodadadi, K., & Walker, W. F. (1997). Quality improvement in a college of engineering instructional program. Journal of Engineering Education, 86(1), 57–64.
Brumm, T. J., Hanneman, L. F., & Mickelson, S. K. (2005). The data are in: Student workplace
competencies in the experiential workplace. Paper presented at the 2005 American Society for
Engineering Education Annual Conference and Exposition, Portland, OR.
Burtner, J., & Barnett, S. (2003). A preliminary analysis of employer ratings of engineering co-op
students. Paper presented at the American Society for Engineering Education Southeastern
Regional Conference, Industrial Division.
Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting paradigms: From Flexner to competencies. Academic Medicine, 77(5), 361–367.
Continuing Professional Education Development Project (University of Pennsylvania). (1981).
Continuing Professional Education Development Project News (April 1981).
Crawley, E. F., Malmqvist, J., Ostlund, S., & Brodeur, D. R. (2007). Rethinking engineering education: The CDIO approach. New York, NY: Springer.
Cupp, S., Moore, P. D., & Fortenberry, N. L. (2004). Linking student learning outcomes to instructional practices: Phase I. Paper presented at the American Society for Engineering Education Annual Conference and Exposition, Salt Lake City, UT.
Curry, L., & Wergin, J. F. (1997). Professional education. In J. G. Gaff & J. L. Ratcliff (Eds.), Handbook of the undergraduate curriculum: A comprehensive guide to purposes, structures, practices, and change. San Francisco, CA: Jossey-Bass.
Daniel, W. W. (1990). Applied nonparametric statistics. Boston, MA: PWS-Kent Publishing
Co.
Davis, D. C., Beyerlein, S. W., & Davis, I. T. (2006). Deriving design course learning outcomes from a professional profile. International Journal of Engineering Education, 22(3), 439–446.
De Graaff, E., & Ravesteijn, W. (2001). Training complete engineers: Global enterprise and
engineering education. European Journal of Engineering Education, 26(4), 419–427.
de Jager, H. G., & Nieuwenhuis, F. J. (2002). The relation between the critical cross-field outcomes and the professional skills required by South African Technikon engineering graduates. Paper presented at the 3rd South African Conference on Engineering Education, Durban, South Africa.
Devore, J. L. (1995). Probability and statistics for engineering and the sciences. Pacific Grove, CA:
Duxbury Press.
Dey, E. L. (1997). Working with low survey response rates: The efficacy of weighting adjustments. Research in Higher Education, 38(2), 215–226.
Doherty, A., Chenevert, J., Miller, R. R., Roth, J. L., & Truchan, L. C. (1997). Developing intellectual skills. In J. G. Gaff & J. L. Ratcliff (Eds.), Handbook of the undergraduate curriculum: A comprehensive guide to purposes, structures, practices, and change (pp. 170–189). San Francisco, CA: Jossey-Bass.
Donahue, W. E. (1997). A descriptive analysis of the perceived importance of leadership competencies to practicing engineers in central Pennsylvania (Doctoral dissertation, The Pennsylvania State University, 1996). Dissertation Abstracts International, A57(08), 3358.
Evans, D. L., Beakley, G. C., Crouch, P. E., & Yamaguchi, G. T. (1993). Attributes of engineering graduates and their impact on curriculum design. Journal of Engineering Education, 82(4), 203–211.
Friedman, M. (1937). The use of ranks to avoid the assumption of normality implicit in the
analysis of variance. Journal of the American Statistical Association, 32(200), 675–701.
Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m rankings. The Annals of Mathematical Statistics, 11(1), 86–92.
Ghorpade, J. (1988). Job analysis: A handbook for the human resource director. Englewood Cliffs,
NJ: Prentice Hall.
Gibbons, M. T. (2009). Engineering by the numbers. In Engineering college profiles and statistics book. Washington, DC: American Society for Engineering Education. Retrieved from http://www.asee.org/papers-and-publications/publications/college-profiles/2009-profile-engineering-statistics.pdf
Heywood, J. (2005). Engineering education: Research and development in curriculum and instruction. Hoboken, NJ: Wiley-IEEE.
Hogg, R. V., & Ledolter, J. (1987). Engineering statistics. New York, NY: Macmillan.
Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Lutz, FL: Psychological Assessment Resources.
Hollander, M., & Wolfe, D. A. (1973). Nonparametric statistical methods. Hoboken, NJ: Wiley-Interscience.
Houle, C. O. (1980). Continuing learning in the professions. San Francisco, CA: Jossey-Bass.
Hsu, J. C. (1996). Multiple comparisons: Theory and methods. Boca Raton, FL: Chapman &
Hall/CRC Press.
Hutcheson, P. A. (1997). Structures and practices. In J. G. Gaff & J. L. Ratcliff (Eds.), Handbook of the undergraduate curriculum: A comprehensive guide to purposes, structures, practices, and change (pp. 100–117). San Francisco, CA: Jossey-Bass.
Jones, E. A. (2002). Curriculum reform in the professions: Preparing students for a changing world (ED470541). ERIC Digest, ERIC Clearinghouse on Higher Education, EDO-HE-2002-08.
Kannankutty, N., & Wilkinson, K. (1999). SESTAT: A tool for studying scientists and engineers in
the United States (NSF 99–337). Arlington, VA: National Science Foundation, Division of
Science Resources Studies.
Katz, S. M. (1993). The entry-level engineer: Problems in transition from student to professional. Journal of Engineering Education, 82(3), 171–174.
Kemp, N. D. (1999). The identification of the most important non-technical skills required by
entry-level engineering students when they assume employment. South African Journal of
Higher Education/SATHO, 13(1), 178–186.
Koen, P. A., & Kohli, P. (1998). ABET 2000: What are the most important criteria to the supervisors of new engineering undergraduates? Proceedings of the 1998 ASEE Annual Conference and Exposition, Seattle, WA.
Lang, J. D., Cruse, S., McVey, F. D., & McMasters, J. H. (1999). Industry expectations of new
engineers: A survey to assist curriculum designers. Journal of Engineering Education, 88(1),
43–51.
Nguyen, D. Q. (1998). The essential skills and attributes of an engineer: A comparative study of academics, industry personnel and engineering students. Global Journal of Engineering Education, 2(1), 65–75.
Office of the Professions. (2000). Report on continuing competence to the Board of Regents of the State of New York. Retrieved from http://www.op.nysed.gov/reports/continuing_competence.pdf
Oude Voshaar, J. H. (1980). (k-1)-mean significance levels of nonparametric multiple comparison procedures. The Annals of Statistics, 8(1), 75–86.
Passow, H. J. (2008). What competencies should undergraduate engineering programs emphasize? A dilemma of curricular design that practitioners’ opinions can inform (Doctoral dissertation). University of Michigan, Ann Arbor, MI.
Pottinger, P. S., & Goldsmith, J. E. (1979). New directions for experiential learning: Defining and
measuring competence. San Francisco, CA: Jossey-Bass.
Prados, J. W., Peterson, G. D., & Lattuca, L. R. (2005). Quality assurance of engineering education through accreditation: The impact of Engineering Criteria 2000 and its global influence. Journal of Engineering Education, 94(1), 165–184.
Sageev, P., & Romanowski, C. J. (2001). A message from recent engineering graduates in the workplace: Results of a survey on technical communication skills. Journal of Engineering Education, 90(4), 685–693.
Sardana, H. K., & Arya, P. P. (2003). Training evaluation of engineering students: A case
study. International Journal of Engineering Education, 19(4), 639–645.
Saunders-Smits, G. N. (2008). Study of Delft aerospace alumni (Doctoral dissertation). Technische Universiteit Delft, Delft, Netherlands.
Saunders-Smits, G. N. (2005). The secret of their success: What factors determine the career success of
an aerospace engineer trained in the Netherlands? Paper presented at the 2005 ASEE Annual
Conference and Exposition, Portland, OR.
Saunders-Smits, G. N. (2007, July). Looking back: The needs of an aerospace engineer in their
degree course with the benefit of hindsight. Proceedings of the SEFI and IGIP Joint Annual
Conference, Miskolc, Hungary.
Scott, G., & Yates, K. W. (2002). Using successful graduates to improve the quality of undergraduate engineering programmes. European Journal of Engineering Education, 27(4), 363–378.
Shea, J. E. (1997). An integrated approach to engineering curricula improvement with multi-objective decision modelling and linear programming (Doctoral dissertation, Oregon State University). Dissertation Abstracts International, A58(5), 1649.
Sheppard, S. D., Macatangay, K., Colby, A., & Sullivan, W. M. (2008). Educating engineers: Designing for the future of the field—book highlights. Retrieved from http://www.carnegiefoundation.org/sites/default/files/publications/elibrary_pdf_769.pdf
Smart, J. C., Feldman, K. A., & Ethington, C. A. (2000). Academic disciplines: Holland’s theory and the study of college students and faculty. Nashville, TN: Vanderbilt University Press.
Spencer, L. M., McClelland, D. C., & Spencer, S. M. (1994). Competency assessment methods: History and state of the art. Boston, MA: Hay McBer Research Press.
Spencer, L. M., & Spencer, S. M. (1993). Competence at work: Models for superior performance.
New York, NY: Wiley.
Spiegel, M. R. (1990). Schaum’s outline of theory and problems of statistics (2nd ed.). New York, NY: McGraw-Hill.
Stark, J. S., Lowther, M. A., & Hagerty, B. M. K. (1986). Responsive professional education: Balancing outcomes and opportunities (ASHE-ERIC Higher Education Report No. 3). Washington, DC: Association for the Study of Higher Education (ED 273 229).
Stark, J. S., Lowther, M. A., & Hagerty, B. M. K. (1987). Faculty perceptions of professional preparation environments: Testing a conceptual framework. The Journal of Higher Education, 58(5), 530–561.
The Steering Committee of the National Engineering Education Research Colloquies. (2006).
The research agenda for the new discipline of engineering education. Journal of Engineering
Education, 95(4), 259–261.
Trumbo, B. E. (2002). Learning statistics with real data. Pacific Grove, CA: Duxbury/Thomson Learning.
Turley, R. T. (1992). Essential competencies of exceptional professional software engineers
(Doctoral dissertation, Colorado State University, 1991). Dissertation Abstracts International,
B53(01), 400.
Ulrich, K. T., & Eppinger, S. D. (1995). Product design and development. New York, NY:
McGraw-Hill.
Wagner, S. F. (1992). HarperCollins college outline: Introduction to statistics. New York, NY:
HarperPerennial.
Whetzel, D. L., Steighner, L. A., & Patsfall, M. R. (2000). Modeling leadership competencies
at the U.S. Postal Service. Management Development Forum, 1(2).
Woollacott, L. (2009). Taxonomies of engineering competencies and quality assurance in engineering education. In A. S. Patil & P. J. Gray (Eds.), Engineering education quality assurance: A global perspective (pp. 257–295). London, UK: Springer.
World Chemical Engineering Council. (2004). How does chemical engineering education meet the
requirements of employment? World Chemical Council Secretariat. Retrieved from
http://www.dechema.de/chemengworld_media/Downloads/short_report.pdf
Zar, J. H. (1999). Biostatistical analysis (4th ed.). Upper Saddle River, NJ: Prentice Hall.
AUTHOR
Honor J. Passow, Ph.D., P.E., is an instructor and curriculum designer at The Dartmouth Institute for Health Policy and Clinical Practice, Dartmouth College, 35 Centerra Parkway, Lebanon, New Hampshire, 03766; honor.passow@dartmouth.edu.