
Using the

Fundamentals
of Engineering
(FE) Examination
as an Outcomes
Assessment Tool

National Council of Examiners for Engineering and Surveying | ncees.org


Using the
Fundamentals
of Engineering
(FE) Examination
as an Outcomes
Assessment Tool

B. Grant Crawford, Ph.D., P.E.


John W. Steadman, Ph.D., P.E., F.ASEE
David L. Whitman, Ph.D., P.E., F.ASEE
Rhonda K. Young, Ph.D., P.E.
About NCEES
The National Council of Examiners for Engineering and Surveying is a nonprofit organization made up of
engineering and surveying licensing boards from all U.S. states, the District of Columbia, Guam, the Northern
Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Since its founding in 1920, NCEES has been committed
to advancing licensure for engineers and surveyors in order to safeguard the health, safety, and welfare of the
American public.

NCEES helps its member licensing boards carry out their duties to regulate the professions of engineering and
surveying. It develops best-practice models for state licensure laws and regulations and promotes uniformity
among the states. It develops and administers the exams used for engineering and surveying licensure throughout
the country. It also provides services to help licensed engineers and surveyors practice their professions in
other U.S. states and territories. For more information, visit http://ncees.org.

© 2021 National Council of Examiners for Engineering and Surveying


Introduction
Institutions of higher education are increasingly being encouraged to evaluate
their academic programs with reference to a national norm or standard. This
pressure may come from state legislators who want to assign cost-benefit
labels and measure the effectiveness of higher education, or it may result
from accreditation requirements, which are progressively becoming driven by
accountability and benchmarking. Whatever the reason, institutions must find
practical, objective ways to assess their programs.

Assessment process
In engineering education, the ABET Engineering Criteria have required a
system of evaluation that includes program educational objectives, student
outcomes, an assessment process to collect data on student outcomes, and an
evaluation process that shows how this data is used to improve the program.
The evaluation process may, and usually does, contain both direct and indirect
measures. Direct measures allow the examination or observation of student
knowledge against a measurable norm. Indirect measures attempt to ascertain
the value of the learning experience through methods that do not involve actual
student performance related to a specific outcome. A disadvantage of indirect
measures is that assumptions must be made regarding the results of activities
such as exit interviews, focus groups, and questionnaires. Accordingly, direct
measures of student outcomes provide important advantages in program
assessment.

One effective tool for direct assessment of certain aspects of engineering education is the NCEES Fundamentals of Engineering (FE) examination.
This exam, developed to measure minimum technical competence, is the first
step in the professional licensing of engineers. It is a pass-fail exam taken by
approximately 50,000 people each year, most of whom are college seniors or
recent graduates. For licensing, the examinee is interested only in whether he
or she passed or failed. For assessment purposes, however, the pass-fail rate is
of secondary importance, and the focus is instead on examinees’ performance
in a given subject.

Effective assessment of academic programs requires a set of tools and processes to evaluate various aspects of education. If the tools are to have any
value as benchmarks or have credibility on some objective basis, they should
make it possible to compare the results over time. This is essential to the
continuous improvement process, since determining the effect of curriculum
or instructional changes on outcomes is critical. Assessment tools with
this comparative value are particularly difficult to obtain. Methods such as
evaluation of portfolios and student exit surveys lack uniformity.

FE examination
As the only nationally normed exam that addresses specific engineering topics, the FE exam
is an extremely attractive tool for outcomes assessment. In fact, since 1996, the FE exam has
been formatted for the express purpose of facilitating the assessment process. For example,
the FE Chemical, Civil, Electrical and Computer, Environmental, Industrial and Systems, and
Mechanical exams include topics from both lower- and upper-level courses that appear in
most institutions’ curricula. The FE Other Disciplines exam is also available for engineering
majors that are outside the discipline-specific exams mentioned above. The topics included
in the discipline-specific exams are determined via surveys that are sent to practitioners and
educators. Specifications are appropriately adjusted every 6–8 years to reflect current practices.
The FE exam specifications were most recently updated effective July 2020 and are provided in the appendix.
FE exam results can be used as one measurement in the assessment of the following student
outcomes included in ABET General Criterion 3: (1) an ability to identify, formulate, and
solve complex engineering problems by applying principles of engineering, science, and
mathematics; (2) an ability to apply engineering design to produce solutions that meet
specified needs with consideration of public health, safety, and welfare, as well as global,
cultural, social, environmental, and economic factors; and (4) an ability to recognize ethical
and professional responsibilities in engineering situations and make informed judgments,
which must consider the impact of engineering solutions in global, economic, environmental,
and societal contexts. The use of FE exam results in a particular outcome area is dependent on
the specific engineering discipline. The current exam specifications for that discipline should
be reviewed to determine the appropriateness of fit. While NCEES recognizes that most
questions on the FE exam do not represent “complex” engineering problems, the assessment
of this outcome could and should include the assessment of basic engineering problems. To
satisfy student outcome 1, programs must demonstrate that students can apply the principles
of engineering, science, and mathematics. The FE exam does this.
Although the FE exam does provide one method of assessment, employing the exam as an
assessment tool has both advantages and disadvantages; therefore, its widespread use in
outcomes assessment should be analyzed carefully. The exam should not, for example, be
used to determine the curricular content of any program. Its purpose is to test competency
for licensure; it is not intended to force programs to be similar. For licensure purposes, the
total score is evaluated rather than the score in any specific subset of questions. Passing the
exam does not denote competence in all subjects but instead shows an average minimum
competency across the exam as a whole.
One potential error in using the FE exam results as an assessment tool is focusing on the
percentage of students who pass the exam. This criterion is too broad to be effective in
improving instruction in specific topics; more specific measures are needed. Too often, the
passing rates of individual programs are compared with those of other institutions, and these
rates become more important than the subject matter evaluations. In such a situation, the
focus becomes “teaching to the exam” and not truly assessing how well students have learned
the subject matter in the curriculum.
Using the FE exam
as an assessment tool
To properly use the FE exam as an assessment tool, the department or program
faculty should first determine what topics they already emphasize in their program.
This is a major part of the program educational objectives and student outcomes
set by each program as required by ABET. After establishing the topics to be
emphasized, faculty should set specific goals for student performance and then use
the relevant portions of the FE exam to assess the students’ knowledge in specific
areas, such as water resources, electric circuits, or machine design. The faculty
should then compare their goals to the knowledge demonstrated by graduates of
the program. For this assessment process to be valid, the population taking the
exam must represent the entire population of graduates from the program. Many
institutions that currently use the FE exam as one of their assessment tools require
that all seniors take the exam and give a good-faith effort (but not necessarily pass).
Analysis of FE examinees over a number of test administrations has revealed that
very few students fail to take the exam seriously. However, getting students to
review materials before the exam, to prepare adequately, and ultimately to do their
best work are legitimate concerns. Often, students do not lack motivation but
instead struggle to make time for review in a crowded senior year (e.g., advanced
coursework, capstone design, outside work commitments). Faculty who have doubts
about whether students are putting forth their best efforts should take steps to
motivate them, such as informing them of the importance of the results to their
future or providing an incentive to pass the exam. Some programs that require
all students to take the exam but do not require a passing score for graduation
(the process recommended by the authors) offer an incentive to do well, such
as reimbursing a portion of the cost of the exam to students who pass and also
providing review sessions on topics pertinent to the exam.
Clearly, if the results are to be useful for outcomes assessment, the students must
perform in a way that accurately reflects their understanding. It should be noted
that when developing the Subject Matter Report (to be discussed later), NCEES
works with psychometricians to remove random guessers so that assessment is not
influenced by examinees who simply guess rather than attempt to correctly answer
the exam questions.
Additionally, students should carefully consider when to take the FE exam during
their senior year. For example, if they take it too soon, they may miss out on the
benefit of course material covered during their final semester. With the computer-
based format, students can schedule their appointment to take the FE exam up
to one year before the test date. This makes it possible for them to book an exam
appointment that accommodates completing particular coursework before taking
the exam.

FE exam topic coverage
To effectively use the FE exam as an assessment tool, faculty should know the
specifications for the exam as well as the level of understanding that the items are
meant to measure. Specifications for the various discipline-specific exams are provided
in the appendix. As mentioned earlier, periodic changes to the specifications are based
in large part on faculty and industry feedback to NCEES surveys. The goal is to ensure
that the exam aligns with topics that are important to and current with what is being
practiced in a specific engineering discipline.
In addition, assessments will be more meaningful if students take the FE exam
matching their engineering discipline, which addresses topics that are germane to that
discipline, rather than the FE Other Disciplines exam, which covers a broader range of
topics. Furthermore, NCEES exam data indicate that students typically perform better
on the discipline-specific exam most closely aligned to their major. For disciplines that
are not represented with a discipline-specific exam, the FE Other Disciplines exam will
provide information on topics that are relevant to most programs.

CBT format of the FE exam


The FE exam is administered via computer-based testing (CBT). Examinees register
for the exam through NCEES, obtaining approval to test from the appropriate state
licensing board as required, and take the exam at any approved Pearson VUE test center
at a time and day that is convenient to them. Examinees receive their results (pass or
fail) 7–10 days after taking the exam. These results are reported to the appropriate
licensing board so that the examinees can be considered for engineer intern status.
In January and July of each calendar year, NCEES produces and distributes detailed
Subject Matter Reports containing results by topic area for examinees from each
institution for the previous six-month period (January–June or July–December).

FE exam results
The NCEES Subject Matter Report, shown as Table 1, summarizes data on examinees from
EAC/ABET-accredited programs who took one of the seven discipline-specific FE exams within
±12 months of graduation. This is the statistical group that should be used as a measure of
program outcomes. Results are presented for examinees from a particular program and for
examinees from ABET-accredited programs who declared the same major and who chose the same
discipline-specific exam. As discussed later, this allows the institution’s faculty to compare
their students’ performance against all accredited examinees. The CBT form of the Subject Matter Report scales
the raw scores for each subject on a 0–15 scale rather than as percentage correct. This is
necessary because each CBT examinee will have different questions in a particular subject
and the difficulty of that set of questions will be different from any other examinee’s set
of questions. A statistical method is used to equilibrate each examinee’s set of questions
so that comparable averages (the institution’s and the accredited comparator) may be
obtained. Comparator standard deviations will also be computed on this 0–15 scale.
Table 1. Subject Matter Report for Institution X
NCEES Fundamentals of Engineering examination
ABET-accredited programs

Examination: Fundamentals of Engineering (FE)
Report title: Subject Matter Report by Major and Examination
Exams administered: Jul 01–Dec 31, 20XX
Examinees included: First-Time Examinees from EAC/ABET-Accredited Engineering Programs
Graduation date: Examinees Testing within 12 months of Graduation Date
Name of Institution: Institution X
Major: Civil
FE Examination: Civil

                                      Institution    ABET Comparator(2)
No. Examinees Taking                  31(1)          2,499
No. Examinees Passing                 26             1,760
Percent Examinees Passing             84%            70%
Uncertainty Range for Scaled Score(4): ± 0.18

                                        No. of     Institution   ABET Comp.   ABET Comp.   Ratio      Scaled
Subject                                 Exam       Avg. Perf.    Avg. Perf.   Standard     Score(4)   Score(4)
                                        Questions  Index(3)      Index        Deviation
Mathematics and Statistics                  8         9.8           9.8          2.7         1.00       0.00
Ethics and Professional Practice            4        10.4          10.1          3.5         1.03       0.09
Engineering Economics                       5        10.2           9.9          3.7         1.03       0.08
Statics                                     8        12.3          11.1          3.8         1.11       0.32
Dynamics                                    4        10.7          10.1          3.6         1.06       0.17
Mechanics of Materials                      7        10.7           9.5          2.8         1.14       0.43
Materials                                   5        10.9          10.3          3.6         1.06       0.17
Fluid Mechanics                             6         9.7           9.7          2.5         1.00       0.00
Surveying                                   6         8.7           9.2          3.1         0.95      -0.16
Water Resources and Environmental
  Engineering                              10        10.5          10.9          3.4         0.96      -0.12
Structural Engineering                     10         9.7           9.4          2.2         1.03       0.14
Geotechnical Engineering                   10         9.5           9.4          2.1         1.01       0.05
Transportation Engineering                  9         9.2           9.0          2.2         1.02       0.09
Construction Engineering                    8        11.5           9.5          3.7         1.21       0.54

1. 0 examinees have been removed from this data because they were flagged as random guessers.
2. Comparator includes all examinees from programs accredited by the ABET commission noted.
3. Performance index is based on a 0–15 scale.
4. These scores are made available for assessment purposes. See the NCEES publication entitled
Using the FE as an Outcomes Assessment Tool at https://ncees.org/engineering/educator-resources/.
TERMS AND CONDITIONS OF DATA USE
This report contains confidential and proprietary NCEES data. The report itself may not be provided to third parties or used for any purpose other than
that contemplated by NCEES and the recipient of this report. The information contained in this report, however, may be shared with accrediting bodies
so long as the report recipient expressly informs the accrediting body that the information is confidential and proprietary and may not be used for
any purpose unrelated to the accreditation review of the institution or program in question.
By using any of the information contained in this report, the report recipient agrees to respect and be bound by these terms, conditions, and
limitations regarding the use of NCEES data. Your cooperation is appreciated.

Application
of FE exam results
Prior to using the exam for assessment purposes, faculty should determine the
expected performance in each topic area, depending on the emphasis of that topic in
their program. For example, if a civil engineering program places little emphasis on
surveying or transportation, students should be expected to perform accordingly.
Conversely, if the program has a strong emphasis on structural engineering, one
would expect a much higher performance in this area compared to the comparator
average. For more conclusive results, faculty should also consider longitudinal
performance over time rather than results from one Subject Matter Report. The
form of this expected performance will depend on the analysis method chosen, a
variety of which have been developed to examine the data from the Subject Matter
Report with regard to program assessment. The two methods described in this
paper are as follows:
• Ratio method
• Scaled score method

Ratio method
A simple and effective process for evaluating exam results is to compare the
institution’s results with comparator averages by topic in a simple ratio. For
this method, the ratio of the performance at Institution X to the comparator
performance is calculated for each topic area emphasized in Institution X’s
program. The faculty can develop appropriate expectations on this scale,
determining how much above or below the comparator average is acceptable for
their students. These expectation levels should be reasonable (remember that
comparisons are between same majors taking the same discipline-specific exam)
and, at the same time, should represent how the institution views its particular
strengths. An expectation of 1.0 is certainly reasonable for most topics, while
expectations of 1.05 to 1.10 might be reasonable on subjects that the institution
believes are its strengths.
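As a concrete illustration, the short sketch below (Python; the data structure and names are hypothetical, with sample values taken from a few rows of Table 1) computes the ratio score for each topic and flags topics that fall below the faculty’s expectation.

# Sketch: ratio-method screening of Subject Matter Report results.
# Hypothetical structure and names; sample values follow rows of Table 1.

# topic: (institution average Performance Index, comparator average Performance Index)
report = {
    "Mathematics and Statistics": (9.8, 9.8),
    "Surveying": (8.7, 9.2),
    "Structural Engineering": (9.7, 9.4),
    "Construction Engineering": (11.5, 9.5),
}

# Faculty-set expectations: 1.0 for most topics, 1.05 to 1.10 where the program
# believes it has particular strengths.
goals = {"Structural Engineering": 1.05}

for topic, (inst_avg, comp_avg) in report.items():
    ratio = inst_avg / comp_avg        # institution performance relative to comparator
    goal = goals.get(topic, 1.0)       # default expectation of 1.0
    status = "meets goal" if ratio >= goal else "below goal"
    print(f"{topic}: ratio = {ratio:.2f} (goal {goal:.2f}) -> {status}")

For these rows, the computed ratios match the Ratio Score column of Table 1; tracked over successive administrations, the same calculation produces the longitudinal plots discussed below.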

Figure 1 shows the ratio scores for a particular Subject Matter Report. This type
of figure can provide a quick check on how the most recent students performed on
each topic within the exam. It also demonstrates why one should not use the pass
rate of the exam as an assessment tool. Note that the pass rate of civil engineering
students at ABET-accredited Institution X is above the comparator pass rate
for all civil engineering students from ABET-accredited institutions. However,
the students are performing below the faculty’s expectations on many of the
individual topic areas.

Figure 1. Ratio scores for a particular Subject Matter Report
[Figure omitted: FE exam ratio score (Institution X to comparator) for each exam topic, plotted against the faculty goal. Major: CVLE; Exam: CVLE. # took (Inst X): 31; % Pass (Inst X): 84%; % Pass (Natl): 70%]

For assessment purposes, it is more informative to graph the performance on individual topics over time. Figures 2 and 3 show such graphs for student performance in two areas emphasized by Institution X.

Figure 2. Longitudinal study of Institution X’s performance in Probability and Statistics
[Figure omitted: ratio of Inst X to comparator by exam administration, plotted against the expected goal. Subject: Mathematics and Statistics; Major: CVLE; Exam: CVLE]
Figure 3. Longitudinal study of Institution X’s performance in Structural Engineering
[Figure omitted: ratio of Inst X to comparator by exam administration, plotted against the expected goal. Subject: Structural Engineering; Major: CVLE; Exam: CVLE]

Regarding these two examples, one could draw the following conclusions:

Institution X initially assumed that its civil engineering students should score
10 percent higher in Mathematics and Statistics than the comparator average for
civil engineering students. (Recall that the Subject Matter Report provides only
comparator performance data for students in the same major.) The authors would
argue, however, that this is a somewhat lofty and perhaps unobtainable goal for
Institution X since the level of coverage in this topic does not vary much from
institution to institution.

After three exam administrations below the expectation level for Mathematics
and Statistics, the faculty made two modifications. The first was a reexamination
of the goal described above. Given that Institution X’s students have no
additional coursework in this subject beyond what would normally be taught
elsewhere, the faculty adjusted their expectation and set a new goal for their
students to meet the comparator average for this subject. This type of “close the
loop” feedback is certainly acceptable. It also appears that some modification
was made to the curriculum (such as a change in instructor, textbook, or course
design), since the ratios since Spring Admin 2 have been very near or above the
faculty expectation level except for one exam administration in Spring Admin 7.

For Structural Engineering (Figure 3), note that Institution X has an
expectation that its students perform at a ratio of 1.05. This is due to the
fact that it emphasizes this subject material and perhaps requires the
students to take more structural analysis courses than would be expected at
other institutions. The performance on this subject is sporadic, with ratios
above 1.25 and as low as 0.88. One possible explanation for this irregular
performance is the small number of students who take any one particular exam.
This can be accounted for in the scaled score approach discussed next. This
type of performance also illustrates a suggestion made by the authors:
put the subject matter on a watch list if it falls below the expected goal for two
consecutive exam administrations, but do not attempt a curricular change in
a subject matter area unless the students’ performance has been below the
expected goal for three consecutive exam administrations.

To smooth out the performance, especially in subjects that might be covered very late in the
curriculum, one can also average the Fall and Spring results over a particular academic year
and plot the yearly average ratios. This is shown in Figure 4 for the Structural Engineering
topic. In this case, the authors would suggest that a subject should go on the program’s watch
list if it falls below the expected goal for one academic year but that a curricular change in
a subject matter area should not be attempted unless the students’ performance has been below
the expected goal for two consecutive academic years.
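A minimal sketch of this academic-year smoothing and the associated watch-list rule (Python; the ratio values are hypothetical and are not the data plotted in Figure 4):

# Sketch: average Fall and Spring ratios by academic year and apply the
# suggested one-year watch list / two-year curricular-change rule.
# Ratio values are hypothetical.

ay_ratios = {                      # academic year -> (Fall ratio, Spring ratio)
    "AY 1": (1.10, 0.98),
    "AY 2": (0.95, 0.92),
    "AY 3": (0.90, 0.94),
}
goal = 1.05                        # faculty expectation for this subject

years_below = 0
for ay, (fall, spring) in ay_ratios.items():
    ay_average = (fall + spring) / 2
    years_below = years_below + 1 if ay_average < goal else 0
    if years_below >= 2:
        action = "consider a curricular change"
    elif years_below == 1:
        action = "place on watch list"
    else:
        action = "no action"
    print(f"{ay}: average ratio = {ay_average:.2f} -> {action}")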

Figure 4. Academic year averaging
[Figure omitted: ratio of Inst X to comparator averaged by academic year, plotted against the expected goal. Subject: Structural Engineering; Major: CVLE; Exam: CVLE]

Scaled score method

In the pencil-and-paper Subject Matter Report, the standard deviation (σ) that was reported was
based on the number of questions answered correctly, not on the percentage correct, and ±3σ
typically covered the entire range from 0 percent correct to 100 percent correct. In the Subject
Matter Report for the CBT exam format, the standard deviation is based on the 0–15 scale
discussed earlier.

The concept of the scaled score method was developed to assist institutions that have a
relatively small number of examinees during each reporting period. This method requires the use
of the standard deviation of the examinees’ results for each topic. The standard deviation of
the Performance Index may be relatively high in specification topics that have few questions
(it may range above 5).

The scaled score was developed to allow institutions to do the following:

• Present the data in a form that represents the number of standard deviations above or below
the comparator average for each topic (as compared to the percentage above or below the
comparator average given by the ratio method)

• Allow a range of uncertainty in the institution’s performance to account for small numbers
of examinees

For the Subject Matter Report for pencil-and-paper FE exams, the scaled score was defined as follows:

Scaled score = (# correct at Univ X − # correct nationally) / national standard deviation
             = # of questions × (% correct at Univ X − % correct nationally) / (100 × national standard deviation)

For the Subject Matter Report for CBT FE exams, the scaled score is defined as follows:

Scaled score = (PI for Univ X − PI comparator) / PI comparator standard deviation

where PI is the Performance Index.

The range of uncertainty comes from the following derivation. From the concept of a confidence
interval on a mean, the mean of a population (µ) is related to the mean (x̄) of a sample of
size n by

x̄ − Zα/2 σ/√n ≤ µ ≤ x̄ + Zα/2 σ/√n

or, the confidence interval on µ is ±Zα/2 σ/√n. The value of Zα/2 can be determined from the
unit normal distribution table for any desired double-sided confidence level.

Let

Y_UnivX = x̄_UnivX ± Zα/2 σ_UnivX/√n_UnivX

Thus,

Y_UnivX − x̄_Comp = x̄_UnivX − x̄_Comp ± Zα/2 σ_UnivX/√n_UnivX

Since NCEES does not provide standard deviation data for individual institutions, it is assumed
that the comparator standard deviation can be substituted for the institution’s standard
deviation. In that case,

(Y_UnivX − x̄_Comp)/σ_Comp = (x̄_UnivX − x̄_Comp)/σ_Comp ± Zα/2/√n_UnivX

Normally, for a 99 percent confidence interval, Zα/2 would be 2.58. However, in this case, the
uncertainty would be so large that the analysis results (see below) for all subjects would
indicate that no action needs to be considered. The authors feel that this is unreasonable and
suggest using a value of Zα/2 = 1.0. This allows a reasonable amount of uncertainty based on
the number of students taking the exam at any specific institution.

Therefore, the scaled score is calculated as

Scaled score = (x̄_UnivX − x̄_Comp)/σ_Comp

and the range of uncertainty for the scaled score is

± 1/√(# of takers at Univ X)

For example, with the 31 examinees shown in Table 1, the range of uncertainty is ±1/√31 ≈ ±0.18,
the value reported on that Subject Matter Report.

Generally, it is more common for faculty to set student goals (expectations) based on the ratio
score rather than on the number of standard deviations above the comparator average. As shown
below, there is a mathematical relationship between the ratio goal and the scaled score goal.
Thus, the ratio goal can be used to estimate an associated scaled score goal.

The scaled score can be rearranged in terms of the ratio score (x̄_UnivX/x̄_Comp) as

Scaled score = (x̄_Comp/σ_Comp)(ratio score − 1)

In that case, the scaled score goal is related to the ratio goal as

Scaled score goal = (x̄_Comp/σ_Comp)(ratio score goal − 1)

Unfortunately, the term x̄_Comp/σ_Comp depends on each subject as well as on the results of each
exam administration. The value of this term generally ranges between 2 and 4. The authors
suggest using an average value of 3 to convert a ratio goal to an associated scaled score goal.
That is, if the ratio goal is 1.05, the associated scaled score goal would be 0.15.

As discussed in the section on ratio scores, the authors suggest that an institution put the
subject matter on a watch list if it falls below the expected goal for two consecutive exam
administrations but not attempt a curricular change in a subject matter area unless the
students’ performance has been below the expected goal for three consecutive exam
administrations.

For the same topics previously discussed, the scaled score graphs are shown in Figures 5 and 6,
and some observations follow each figure.
Figure 5. Scaled score results for Institution X's performance in Mathematics and Statistics
[Chart: scaled score (SS) by exam administration (Spring and Fall administrations) for Subject: Mathematics and Statistics, Major: CVLE, Exam: CVLE, with High, Goal, and Low reference lines]
For Mathematics and Statistics (shown in Figure 5), a ratio goal of 1.1 translated to a scaled
score goal of 0.30, and a ratio goal of 1.0 translated to a scaled score goal of 0.0. After the
reduction in expectation that occurred for the Spring Admin 2 exam administration, one can
see that, within the range of uncertainty provided by this analysis method, the institution has
scored at or above the expectation level for all exam administrations except for Spring Admin 7.
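The translation from a ratio goal to a scaled score goal can be reproduced with a short calculation. The Python sketch below is illustrative only: it assumes that the ratio compares the institution's average percentage correct with the comparator group's average and that the scaled score divides the difference between those averages by the comparator group's standard deviation; the comparator statistics shown are hypothetical.

    # Illustrative sketch only. Assumed definitions (the comparator statistics are hypothetical):
    #   ratio        = institution average % correct / comparator average % correct
    #   scaled score = (institution average - comparator average) / comparator std. deviation

    def scaled_score_goal(ratio_goal, comparator_avg, comparator_std):
        """Convert a ratio goal into the equivalent scaled score goal."""
        institution_avg_goal = ratio_goal * comparator_avg
        return (institution_avg_goal - comparator_avg) / comparator_std

    comparator_avg = 60.0  # hypothetical national average % correct for the topic
    comparator_std = 20.0  # hypothetical national standard deviation

    print(round(scaled_score_goal(1.1, comparator_avg, comparator_std), 2))  # 0.3
    print(round(scaled_score_goal(1.0, comparator_avg, comparator_std), 2))  # 0.0

With these hypothetical comparator statistics, a ratio goal of 1.1 corresponds to a scaled score goal of 0.3 and a ratio goal of 1.0 corresponds to 0.0, matching the goals quoted above.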
Figure 6. Scaled score results for Institution X's performance in Structural Engineering
[Chart: scaled score (SS) by exam administration (Spring and Fall administrations) for Subject: Structural Engineering, Major: CVLE, Exam: CVLE, with High, Goal, and Low reference lines]
For Structural Engineering (shown in Figure 6), a ratio goal of 1.05 translated to a scaled
score goal of 0.15. Using the ratio method, the students’ performance fell below the
expected goal for five consecutive exam administrations (Spring Admin 2 through Spring
Admin 4). However, using the scaled score approach, the goal was reached for Spring
Admin 3, which indicated that this subject should have simply remained on the watch list.
Subsequent results seem to indicate that the students are once again achieving at the
expected level, with only two isolated administrations (Fall Admin 7 and Spring Admin 8)
falling below the expectation level.
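The watch-list logic described above can be stated as a simple decision rule: any below-goal administration puts a topic on the watch list, and curricular action is considered only after three consecutive below-goal administrations. The Python sketch below is a minimal illustration of that rule as described in the text; the score histories and the goal value are hypothetical.

    def review_status(scaled_scores, goal, consecutive_needed=3):
        """Classify a topic as 'ok', 'watch', or 'action' from its scaled score history.

        A below-goal result puts the topic on the watch list; only a run of
        `consecutive_needed` below-goal administrations ending with the most
        recent one elevates it to an action item.
        """
        run = 0  # current streak of consecutive below-goal administrations
        for score in scaled_scores:
            run = run + 1 if score < goal else 0
        if run >= consecutive_needed:
            return "action"
        return "watch" if run > 0 else "ok"

    # Hypothetical histories checked against a scaled score goal of 0.15
    print(review_status([0.20, -0.05, 0.02, 0.18], goal=0.15))  # ok (the dip ended)
    print(review_status([0.20, 0.10, 0.05, -0.02], goal=0.15))  # action (3 below goal in a row)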
Other issues
In making an assessment using the FE exam results, faculty must also consider that
some students may not have completed the coursework before taking the FE exam.
For example, some students take structural design in the spring semester of their
senior year; therefore, those who take the FE exam before taking that course will
not be prepared for that subject area.
Effective assessment should result in continuous program improvement. Faculty
should evaluate the results of student performance in individual subject areas.
Doing so will identify areas in which students are performing below the goals
established by the faculty and perhaps significantly below national averages.
Evaluations should initiate not only the necessary changes in textbooks, teaching
mechanisms, and laboratory procedures but also the possible reallocation of faculty
to improve student performance. In one documented case in which FE exam results
were used, student performance was significantly below the national average in
Hydraulics and Hydrologic Systems. The department head was surprised because
the student evaluations for the course had been very good over several years.
However, upon investigation, he found that the laboratory procedures used to
reinforce the theory were shallow and the performance demand on the students
was low. The laboratory procedures and depth of instruction were improved over
several semesters without lessening instruction on the theory. The most recent
exam administrations indicate a significant improvement in student performance in
this area. A point that cannot be overemphasized is that for assessment purposes,
the results of multiple exam administrations should be considered and the exam
content compared to the course content.
One challenge with using the FE exam for outcomes assessment is that at many
institutions, not all engineering students take the FE exam. Thus, the institution
has a self-selected sample that will, most likely, contain a higher percentage of
the best and most motivated students. Evaluation of FE exam results for this
situation can still be useful (a) if one assumes that the characteristics of the self-
selecting group stay relatively constant from time frame to time frame and (b) if the
evaluator looks at the trend of either the ratio score or the scaled score over time in
addition to the student expectations set by the faculty. That is, the ratio or scaled
score for a particular topic may consistently exceed the faculty's expectation yet still
decline over time. This situation would be one in which curricular changes should be
instituted to arrest the decline.
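A simple way to watch for the declining trend described above is to fit a least-squares line to a topic's scaled scores (or ratios) across administrations and examine the sign of the slope. The Python sketch below is illustrative only; the score history is hypothetical, and a program would apply the same check to each subject area in its FE subject reports.

    def trend_slope(scores):
        """Least-squares slope of scores versus administration index (needs two or more scores)."""
        n = len(scores)
        xs = range(n)
        x_mean = sum(xs) / n
        y_mean = sum(scores) / n
        num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores))
        den = sum((x - x_mean) ** 2 for x in xs)
        return num / den

    # Hypothetical topic that exceeds a goal of 0.0 at every administration but is slipping
    scores = [0.55, 0.48, 0.44, 0.37, 0.30, 0.26]
    slope = trend_slope(scores)
    print(f"slope per administration: {slope:.3f}")
    if slope < 0 and all(s > 0.0 for s in scores):
        print("Above goal at every administration, but declining; consider a curricular review")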
Conclusions
After much experience using the FE exam for outcomes assessment, the authors find it
to be a useful part of a balanced assessment program that includes other standardized
tests, assessment tools, alumni surveys, and placement data. The FE exam is particularly
important because it is the only nationally normed test of lower- and upper-level
engineering knowledge. The detailed reports of performance by subject area provide
information that can help to evaluate a program’s success in achieving the student
outcomes specified by ABET. Over time, these reports can also help programs document
the effects of curriculum revisions, teaching innovations, and other actions taken to
improve student mastery of engineering topics.
Based on their experience, the authors conclude the following:

• Engineering programs should seriously consider using the FE exam subject-level performance data as part of their program assessment, with proper regard for the caveats described.

• A program will gain the most from using the FE exam as an assessment tool if it requires all students to take the exam, particularly the appropriate discipline-specific exam, and if faculty establish specific goals for the program.

• State licensing boards should be proactive in working with academic programs to stress the use and value of the FE exam as an assessment tool.

• Institutions must remember that the primary purpose of the FE exam is to assess minimal technical competence. Other assessment tools need to be used to assess higher-level theories or critical thought that might be the focus of some portion of an institution's program.
Appendix

FE exam specifications effective beginning with the July 2020 examinations
Listed below are the topics that each exam covers and the range of the number of questions in each
topic area. Examinees have 6 hours to complete the exam, which contains 110 multiple-choice
questions. The 6-hour appointment also includes a tutorial, a break, and a brief survey at the conclusion.
Chemical
Topic: No. of Questions
Mathematics: 6–9
Probability and Statistics: 4–6
Engineering Sciences: 4–6
Materials Science: 4–6
Chemistry and Biology: 7–11
Fluid Mechanics/Dynamics: 8–12
Thermodynamics: 8–12
Material/Energy Balances: 10–15
Heat Transfer: 8–12
Mass Transfer and Separation: 8–12
Solids Handling: 3–5
Chemical Reaction Engineering: 7–11
Economics: 4–6
Process Design: 7–11
Process Control: 4–6
Safety, Health, and Environment: 5–8
Ethics and Professional Practice: 3–5
Civil
Topic: No. of Questions
Mathematics and Statistics: 8–12
Ethics and Professional Practice: 4–6
Engineering Economics: 5–8
Statics: 8–12
Dynamics: 4–6
Mechanics of Materials: 7–11
Materials: 5–8
Fluid Mechanics: 6–9
Surveying: 6–9
Water Resources and Environmental Engineering: 10–15
Structural Engineering: 10–15
Geotechnical Engineering: 10–15
Transportation Engineering: 9–14
Construction Engineering: 8–12
Electrical and Computer
Topic: No. of Questions
Mathematics: 11–17
Probability and Statistics: 4–6
Ethics and Professional Practice: 4–6
Engineering Economics: 5–8
Properties of Electrical Materials: 4–6
Circuit Analysis (DC and AC Steady State): 11–17
Linear Systems: 5–8
Signal Processing: 5–8
Electronics: 7–11
Power Systems: 8–12
Electromagnetics: 4–6
Control Systems: 6–9
Communications: 5–8
Computer Networks: 4–6
Digital Systems: 8–12
Computer Systems: 5–8
Software Engineering: 4–6
Environmental
Topic: No. of Questions
Mathematics: 5–8
Probability and Statistics: 4–6
Ethics and Professional Practice: 5–8
Engineering Economics: 5–8
Fundamental Principles: 7–11
Environmental Chemistry: 7–11
Health Hazards and Risk Assessment: 4–6
Fluid Mechanics and Hydraulics: 12–18
Thermodynamics: 3–5
Surface Water Resources and Hydrology: 9–14
Groundwater, Soils, and Sediments: 8–12
Water and Wastewater: 12–18
Air Quality and Control: 8–12
Solid and Hazardous Waste: 7–11
Energy and Environment: 4–6
Industrial and Systems
Topic: No. of Questions
Mathematics: 6–9
Engineering Sciences: 4–6
Ethics and Professional Practice: 4–6
Engineering Economics: 9–14
Probability and Statistics: 10–15
Modeling and Quantitative Analysis: 9–14
Engineering Management: 8–12
Manufacturing, Service, and Other Production Systems: 9–14
Facilities and Supply Chain: 9–14
Human Factors, Ergonomics, and Safety: 8–12
Work Design: 7–11
Quality: 9–14
Systems Engineering, Analysis, and Design: 8–12
Mechanical
Topic: No. of Questions
Mathematics: 6–9
Probability and Statistics: 4–6
Ethics and Professional Practice: 4–6
Engineering Economics: 4–6
Electricity and Magnetism: 5–8
Statics: 9–14
Dynamics, Kinematics, and Vibrations: 10–15
Mechanics of Materials: 9–14
Material Properties and Processing: 7–11
Fluid Mechanics: 10–15
Thermodynamics: 10–15
Heat Transfer: 7–11
Measurements, Instrumentation, and Controls: 5–8
Mechanical Design and Analysis: 10–15
Other Disciplines
Topic: No. of Questions
Mathematics: 8–12
Probability and Statistics: 6–9
Chemistry: 5–8
Instrumentation and Controls: 4–6
Engineering Ethics and Societal Impacts: 5–8
Safety, Health, and Environment: 6–9
Engineering Economics: 6–9
Statics: 9–14
Dynamics: 9–14
Strength of Materials: 9–14
Materials: 6–9
Fluid Mechanics: 12–18
Basic Electrical Engineering: 6–9
Thermodynamics and Heat Transfer: 9–14
Report Authors

B. Grant Crawford, Ph.D., P.E.
Grant Crawford is professor of mechanical engineering at Quinnipiac
University. He has served on the NCEES FE exam committee since 2005 and
is an active member of the engineering education community, having served
on the ASEE Board of Directors three times. He has been involved in and led
ABET accreditation and assessment at the program level for over 14 years. He
is an EAC PEV and ETAC Commissioner for ABET.
John W. Steadman, Ph.D., P.E., F.ASEE
John W. Steadman is dean of the University of South Alabama College of
Engineering. He previously taught at the University of Wyoming for 32 years, serving for
14 of those years as head of the electrical and computer engineering department. He
is a past president of NCEES and IEEE-USA and an emeritus member of the
Wyoming Board of Registration for Professional Engineers and Professional
Land Surveyors. He has served on NCEES exam committees since 1986.
David L. Whitman, Ph.D., P.E., F.ASEE
David L. Whitman is Professor Emeritus at the University of Wyoming. He is a
past president of NCEES and an emeritus member of the Wyoming State Board
of Registration for Professional Engineers and Professional Land Surveyors.
He has served on various NCEES committees and has been involved in preparing the
FE exam since 2001.
Rhonda K. Young, Ph.D., P.E.
Rhonda K. Young is professor and chair of civil engineering at Gonzaga
University. She has served on the NCEES FE exam committee since 2005 and is
an active member of the transportation engineering education community. She
has been involved in ABET accreditation and assessment at the department level
for over 15 years.
Discover more.
ncees.org